diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9b4c6c69c..a54c0a694 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
# Changelog
+
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
@@ -6,21 +7,29 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [UNRELEASED] - YYYY-MM-DD
+### Added
+
+- [#867](https://github.com/equinor/webviz-subsurface/pull/867) - Added new `SimulationTimeSeries` plugin, with code structured according to the best-practice plugin example `webviz-plugin-boilerplate` and using `EnsembleSummaryProvider`. New functionality includes multiple Delta Ensembles in the same plot, a selectable resampling frequency, and the possibility to group subplots per selected ensemble or per selected vector.
+
### Changed
+
- [#889](https://github.com/equinor/webviz-subsurface/pull/889) - Added `rel_file_pattern` argument to `.arrow`-related factory methods in `EnsembleSummaryProviderFactory`.
## [0.2.8] - 2021-12-10
### Fixed
+
- [#877](https://github.com/equinor/webviz-subsurface/pull/877) - Update `WellLogViewer` to work with the latest version of the component.
- [#875](https://github.com/equinor/webviz-subsurface/pull/875) - Fixed an issue with the uncertainty envelope in `Structural Uncertainty` where the plot misbehaved for discontinuous surfaces. A side effect is that percentile calculations are now much faster.
### Added
+
- [#856](https://github.com/equinor/webviz-subsurface/pull/856) - `VolumetricAnalysis` - Added support for comparing sensitivities both within and across ensembles.
- [#721](https://github.com/equinor/webviz-subsurface/pull/721) - Added data provider for reading ensemble summary data through a unified interface, supporting optional lazy resampling/interpolation depending on data input format.
- [#845](https://github.com/equinor/webviz-subsurface/pull/845) - Added realization plot colored by sensitivity to tornado tab in `VolumetricAnalysis`.
### Changed
+
- [#855](https://github.com/equinor/webviz-subsurface/pull/855) - `VolumetricAnalysis` now supports mixing sensitivity and non-sensitivity ensembles.
- [#853](https://github.com/equinor/webviz-subsurface/pull/853) - `ParameterResponseCorrelation` improvements. Constant parameters are removed from the correlation figure, and an option to set the maximum number of parameters has been added. A trendline is added to the scatterplot. Axes in the correlation figure are now calculated based on the data.
- [#844](https://github.com/equinor/webviz-subsurface/pull/844) - `SeismicMisfit` improvements. Data ranges now follow the selected attribute. User-defined zooms are now kept during callbacks. New option in the slice plot to show individual realizations. Prettified all hoverdata. New colorscales. Polygons are sorted by name in the drop-down selector.
@@ -30,22 +39,25 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [0.2.7] - 2021-11-08
### Added
+
- [#851](https://github.com/equinor/webviz-subsurface/pull/851) - Added new column 'SENSNAME_CASE' for improved plotting and filtering of sensitivity ensembles.
- [#825](https://github.com/equinor/webviz-subsurface/pull/825) - Added options to create separate tornados for e.g. Region/Zone in `VolumetricAnalysis`, as well as various improvements to the tornado figure.
- [#734](https://github.com/equinor/webviz-subsurface/pull/734) - New plugin, `SeismicMisfit`, for comparing observed and modelled seismic attributes. Multiple views, including misfit quantification and coverage plots.
- [#809](https://github.com/equinor/webviz-subsurface/pull/809) - `GroupTree` - added more statistical options (P10, P90, P50/Median, Max, Min). Some improvements to the menu layout and behaviour.
### Fixed
+
- [#841](https://github.com/equinor/webviz-subsurface/pull/841) - Bugfixes and improved hoverlabels for `Tornado component`.
- [#833](https://github.com/equinor/webviz-subsurface/pull/833) - Fixed errors in `VolumetricAnalysis` related to empty/insufficient data after filtering in the `tornadoplots` and `comparison` tabs.
- [#817](https://github.com/equinor/webviz-subsurface/pull/817) - `DiskUsage` - Fixed formatting error in bar chart tooltip.
- [#820](https://github.com/equinor/webviz-subsurface/pull/820) - `SurfaceWithGridCrossSection` - Fixed an issue with intersecting grids generated with
-`xtgeo==2.15.2`. Grids exported from RMS with this version of xtgeo should be re-exported using a newer version as the subgrid information is incorrect.
+ `xtgeo==2.15.2`. Grids exported from RMS with this version of xtgeo should be re-exported using a newer version as the subgrid information is incorrect.
- [#838](https://github.com/equinor/webviz-subsurface/pull/838) - `AssistedHistoryMatchingAnalysis` - Fixed an issue with output of a callback being used as input in another before the output object was guaranteed to exist.
## [0.2.6] - 2021-10-08
### Added
+
- [#783](https://github.com/equinor/webviz-subsurface/pull/783) - `VolumetricAnalysis` - added a tab with Fipfile QC for inspection of which `FIPNUM`s and `REGION/ZONE`s have been combined in order to get comparable volumes between dynamic and static sources. This tab is only available if a fipfile is given as input.
- [#777](https://github.com/equinor/webviz-subsurface/pull/777) - `VolumetricAnalysis` - added tabs with `Source comparison` and `Ensemble comparison` as QC tools for quick identification of where and why volumetric changes occur across sources (e.g. static vs dynamic) or ensembles (e.g. model revisions or ahm iterations).
- [#709](https://github.com/equinor/webviz-subsurface/pull/709) - Added `VectorCalculator` component in `ReservoirSimulationTimeSeries` plugin for calculation and graphing of custom simulation time series vectors.
@@ -54,11 +66,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- [#755](https://github.com/equinor/webviz-subsurface/pull/755) - Updated existing and added new tests for the Drogon dataset.
### Changed
+
- [#788](https://github.com/equinor/webviz-subsurface/pull/788) - Prevent mixing volumes from different sensitivities in `VolumetricAnalysis` by not allowing more than one sensitivity to be selected as a filter unless the user has grouped on SENSNAME.
- [#760](https://github.com/equinor/webviz-subsurface/pull/760) - Updated to `Dash 2.0`.
- [#761](https://github.com/equinor/webviz-subsurface/pull/761) - Store `xtgeo.RegularSurface` as bytestream instead of serializing to `json`.
### Fixed
+
- [#802](https://github.com/equinor/webviz-subsurface/pull/802) - Removed `BO` or `BG` as response options for the tornados in `VolumetricAnalysis`, selecting them caused an error.
- [#794](https://github.com/equinor/webviz-subsurface/pull/794) - Fixed an issue in `VolumetricAnalysis` to prevent design matrix runs with only a single Monte Carlo sensitivity from being interpreted as a sensitivity run.
- [#765](https://github.com/equinor/webviz-subsurface/pull/765) - Use correct inline/xline ranges for axes in `SegyViewer` z-slice graph.
@@ -66,17 +80,21 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- [#791](https://github.com/equinor/webviz-subsurface/pull/791) - Ensure correct map bounds in `SurfaceViewerFMU` when switching between attributes with different geometry.
## [0.2.5] - 2021-09-03
+
### Added
+
- [#733](https://github.com/equinor/webviz-subsurface/pull/733) - Added plugin to visualize well logs from files using [videx-welllog](https://github.com/equinor/videx-wellog).
- [#708](https://github.com/equinor/webviz-subsurface/pull/708) - Added support for new report format for `DiskUsage`, which improves the estimate of free disk space.
### Changed
+
- [#724](https://github.com/equinor/webviz-subsurface/pull/724) - Separated out Tables as a new tab in `VolumetricAnalysis`.
- [#723](https://github.com/equinor/webviz-subsurface/pull/723) - Added a custom option to allow free selection of responses shown in the tornadoplots in `VolumetricAnalysis`.
- [#717](https://github.com/equinor/webviz-subsurface/pull/717) - Keep zoom state in `ReservoirSimulationTimeseries` (inc `Regional` and `OneByOne`) and `RelativePermeability` plugins using `uirevision`.
-- [#707](https://github.com/equinor/webviz-subsurface/pull/707) - Generalized and improved some plot functions in `PropertyStatistics`, `ParameterAnalysis` and `VolumetricAnalysis`. Replaced histogram with distribution plot in `PropertyStatistics`.
+- [#707](https://github.com/equinor/webviz-subsurface/pull/707) - Generalized and improved some plot functions in `PropertyStatistics`, `ParameterAnalysis` and `VolumetricAnalysis`. Replaced histogram with distribution plot in `PropertyStatistics`.
### Fixed
+
- [#749](https://github.com/equinor/webviz-subsurface/pull/749) - `LinePlotterFMU` check function for `x` axis value alignment across realizations now supports single valued columns.
- [#747](https://github.com/equinor/webviz-subsurface/pull/747) - Added missing realization filter on `OK` file in `EnsembleTableProviderFactory`.
- [#753](https://github.com/equinor/webviz-subsurface/pull/753) - Do not add `Count` column from grid property statistics input data as a selector in `PropertyStatistics`. Handle missing surfaces in `PropertyStatistics`
@@ -84,21 +102,24 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [0.2.4] - 2021-07-13
### Added
+
- [#669](https://github.com/equinor/webviz-subsurface/pull/669) - New generic plugin to visualize tornado plots from a csv file of responses.
- [#685](https://github.com/equinor/webviz-subsurface/pull/685) - Added ERT forward model to convert from `.UNSMRY` to Arrow IPC file format (`.arrow`).
- [#662](https://github.com/equinor/webviz-subsurface/pull/662) - Added support in `WellCompletion` for connection history from summary data.
### Changed
+
- [#681](https://github.com/equinor/webviz-subsurface/pull/681) - `VolumetricAnalysis` upgrades - added page with tornadoplots to `VolumetricAnalysis`, automatic computation of volumes
-from the water zone if the volumes from the full grid geometry are included, and possibility of computing NTG from facies.
+ from the water zone if the volumes from the full grid geometry are included, and possibility of computing NTG from facies.
- [#683](https://github.com/equinor/webviz-subsurface/pull/683) - Added deprecation warning to `InplaceVolumesOneByOne`.
- [#661](https://github.com/equinor/webviz-subsurface/pull/661) - Moved existing clientside function to a general dash_clientside file to facilitate adding more functions later on.
- [#658](https://github.com/equinor/webviz-subsurface/pull/658) - Refactored Tornado figure code to be more reusable. Improved the Tornado bar visualization, added table display
- and improved layout in relevant plugins.
+ and improved layout in relevant plugins.
- [#676](https://github.com/equinor/webviz-subsurface/pull/676) - Added realization points to Tornado visualization. Various improvements to Tornado figure layout.
- [#667](https://github.com/equinor/webviz-subsurface/pull/667) - Standardized layout and styling of plugins.
### Fixed
+
- [#666](https://github.com/equinor/webviz-subsurface/pull/666) - Handle operations between surfaces with different topology in `SurfaceViewerFMU`
- [#675](https://github.com/equinor/webviz-subsurface/pull/675) - Adjust minimum zoom level in surface plugins for visualization of large surfaces.
- [#715](https://github.com/equinor/webviz-subsurface/pull/715) - After this, the `WellCompletion` plugin finds the kh unit even if the unit system is in an INCLUDE file. Also, `well_connection_status_file` refers to the same variable in the plugin and ert job.
@@ -106,50 +127,61 @@ from the water zone if the volumes from the full grid geometry are included, and
## [0.2.3] - 2021-06-07
### Changed
+
- [#651](https://github.com/equinor/webviz-subsurface/pull/651) - Fixed issue with `_` in regions for `ReservoirSimulationTimeseriesRegional`.
- [#642](https://github.com/equinor/webviz-subsurface/pull/642) - New functionality in `WellCompletions`: New stratigraphy input and tree
-selector in filters. Possibility to input colors either in stratigraphy or in the zone_layer_mapping `.lyr`-file. And kh unit automatically
-found in Eclipse files.
+ selector in filters. Possibility to input colors either in stratigraphy or in the zone_layer_mapping `.lyr`-file. And kh unit automatically
+ found in Eclipse files.
### Fixed
+
- [#659](https://github.com/equinor/webviz-subsurface/pull/659) - Added missing `display: block` in option selectors (e.g. radio items).
### Added
+
- [#645](https://github.com/equinor/webviz-subsurface/pull/645) - New generic lineplotter plugin for FMU data. This is the first plugin that uses
-a new system to reduce the memory footprint of large datasets.
+ a new system to reduce the memory footprint of large datasets.
- [#641](https://github.com/equinor/webviz-subsurface/pull/641) - New plugin to analyze volumetrics results from FMU ensembles, replaces the `InplaceVolumes` plugin.
## [0.2.2] - 2021-04-30
### Changed
+
- [#618](https://github.com/equinor/webviz-subsurface/pull/618) - Added deprecation warning to `HorizonUncertaintyViewer`,
-`WellCrossSection` and `WellCrossSectionFMU`. These plugins will soon be removed. Relevant functionality is implememented
-in the new `StructuralUncertainty` plugin.
+ `WellCrossSection` and `WellCrossSectionFMU`. These plugins will soon be removed. Relevant functionality is implemented
+ in the new `StructuralUncertainty` plugin.
- [#646](https://github.com/equinor/webviz-subsurface/pull/646) - Replaced `DropDowns` in `ReservoirSimulationTimeSeries` plugin with `VectorSelector` components.
### Fixed
+
- [#621](https://github.com/equinor/webviz-subsurface/pull/621) - Fixed issue in `StructuralUncertainty` where map base layers did not load
-correctly from persisted user settings.
+ correctly from persisted user settings.
- [#626](https://github.com/equinor/webviz-subsurface/pull/626) - Fixed small bugs in the docstring of `WellCompletions` and added a tour_steps method.
## [0.2.1] - 2021-04-27
+
### Changed
+
- [#612](https://github.com/equinor/webviz-subsurface/pull/612) - New features in `ReservoirSimulationTimeSeries`: Statistical lines, option to remove history trace, histogram available when plotting individual realizations.
### Fixed
+
- [#615](https://github.com/equinor/webviz-subsurface/pull/615) - Improve table performance of `AssistedHistoryMatchingAnalysis`.
### Added
+
- [#605](https://github.com/equinor/webviz-subsurface/pull/605) - New plugin to analyze structural uncertainty from FMU ensembles.
- [#610](https://github.com/equinor/webviz-subsurface/pull/610) - New plugin `WellCompletions` to visualize completion data of simulation wells.
## [0.2.0] - 2021-03-28
+
- [#604](https://github.com/equinor/webviz-subsurface/pull/604) - Consolidates surface loading and statistical calculation of surfaces by introducing a shared
-SurfaceSetModel. Refactored SurfaceViewerFMU to use SurfaceSetModel.
+ SurfaceSetModel. Refactored SurfaceViewerFMU to use SurfaceSetModel.
- [#586](https://github.com/equinor/webviz-subsurface/pull/586) - Added phase ratio vs pressure and density vs pressure plots. Added unit and density functions to PVT library. Refactored code and added checklist for plots to be viewed in PVT plot plugin. Improved the layout.
- [#599](https://github.com/equinor/webviz-subsurface/pull/599) - Fixed an issue in ParameterAnalysis where the plugin did not initialize without FIELD vectors
### Fixed
+
- [#602](https://github.com/equinor/webviz-subsurface/pull/602) - Prevent calculation of data for download at initialisation of ReservoirSimulationTimeSeries.
- [#592](https://github.com/equinor/webviz-subsurface/pull/592) - Fixed bug for inferred frequency of yearly summary data.
- [#594](https://github.com/equinor/webviz-subsurface/pull/594) - Fixed bug in SurfaceViewerFMU where surfaces with only undefined values were not handled properly.
@@ -157,52 +189,68 @@ SurfaceSetModel. Refactored SurfaceViewerFMU to use SurfaceSetModel.
- [#595](https://github.com/equinor/webviz-subsurface/pull/595) - Raise a descriptive error in SurfaceViewerFMU plugin if no surfaces are available.
## [0.1.9] - 2021-02-23
+
### Fixed
+
- [#569](https://github.com/equinor/webviz-subsurface/pull/569) - Allow sharing of ensemble smry datasets in memory between plugin instances. Note that currently sharing can only be accomplished between plugin instances that use the same ensembles, column_keys and time_index.
- [#552](https://github.com/equinor/webviz-subsurface/pull/552) - Fixed an issue where webvizstore was not properly initialized in ParameterAnalysis plugin
- [#549](https://github.com/equinor/webviz-subsurface/pull/549) - Fixed issue in WellCrossSectionFMU that prevented use of user provided colors.
- [#561](https://github.com/equinor/webviz-subsurface/pull/561) - Fixed issue in ParameterAnalysis for non-numeric parameters (dropping them).
## [0.1.8] - 2021-01-26
+
### Changed
+
- [#538](https://github.com/equinor/webviz-subsurface/issues/538) - Refactored code for reading Eclipse INIT files and added framework for units and unit conversions.
- [#544](https://github.com/equinor/webviz-subsurface/pull/544) - All plugins now use the new special `webviz_settings` argument to the plugin's `__init__` method for common settings, in favor of piggybacking a dictionary onto the Dash application object.
- [#541](https://github.com/equinor/webviz-subsurface/pull/541) - Implemented new onepass shader for all surface plugins.
### Fixed
+
- [#536](https://github.com/equinor/webviz-subsurface/pull/536) - Fixed issue and bumped dependencies related to Pandas version 1.2.0. Bumped dependency to webviz-config to support mypy typechecks.
## [0.1.7] - 2020-12-19
+
### Fixed
+
- [#526](https://github.com/equinor/webviz-subsurface/pull/526) - Fixes to `SurfaceViewerFMU`. User-defined map units are now correctly displayed. Map height can now be set (useful for maps with elongated geometry). Added some missing documentation.
- [#531](https://github.com/equinor/webviz-subsurface/pull/531) - The change in [#505](https://github.com/equinor/webviz-subsurface/pull/505) resulted in potentially very large datasets when using `raw` sampling. Some users experienced `MemoryError`. `column_keys` filtering is therefore now used when loading and storing data if `sampling` is `raw` in plugins using `UNSMRY` data, most noticeable in `BhpQc` which has `raw` as the default and only option.
### Added
+
- [#529](https://github.com/equinor/webviz-subsurface/pull/529) - Added support for PVDO and PVTG to PVT plot and to respective data modules.
-- [#509](https://github.com/equinor/webviz-subsurface/pull/509) - Added descriptive hoverinfo to `ParameterAnalysis`. Average and standard deviation of parameter value
-for each ensemble shown on mouse hover over figure. Included dynamic sizing of plot titles and plot spacing to optimize the appearance of plots when many parameters are plotted.
+- [#509](https://github.com/equinor/webviz-subsurface/pull/509) - Added descriptive hoverinfo to `ParameterAnalysis`. Average and standard deviation of parameter value
+ for each ensemble shown on mouse hover over figure. Included dynamic sizing of plot titles and plot spacing to optimize the appearance of plots when many parameters are plotted.
## [0.1.6] - 2020-11-30
+
### Fixed
+
- [#505](https://github.com/equinor/webviz-subsurface/pull/505) - Fixed recent performance regression issue for loading of UNSMRY data. Loading times when multiple plugins are using the same data is now significantly reduced. Note that all UNSMRY vectors are now stored in portable apps, independent of choice of column_keys in individual plugins.
## [0.1.5] - 2020-11-26
+
### Added
+
- [#478](https://github.com/equinor/webviz-subsurface/pull/478) - New plugin `AssistedHistoryMatchingAnalysis`. This dashboard helps to analyze the update step performed during assisted history match. E.g. which observations are causing an update in a specific parameter. Based on Kolmogorov–Smirnov.
- [#494](https://github.com/equinor/webviz-subsurface/pull/494) - New plugin `ParameterAnalysis`. Dashboard to visualize parameter distributions and statistics for FMU ensembles, and to investigate parameter correlations on reservoir simulation time series data.
### Fixed
+
- [#486](https://github.com/equinor/webviz-subsurface/pull/486) - Bug fix in `PropertyStatistics`. Show realization number instead of dataframe index for hover text.
- [#498](https://github.com/equinor/webviz-subsurface/pull/498) - Bug fix in `RFT-plotter`. Sort dataframe by date to get correct order in date-slider.
## [0.1.4] - 2020-10-29
+
### Added
+
- [#457](https://github.com/equinor/webviz-subsurface/pull/457) - Raise a descriptive error if a scratch ensemble is empty, i.e. no `OK` target file is found in any realizations.
- [#427](https://github.com/equinor/webviz-subsurface/pull/427) - `BhpQc` plugin added: Quality check that simulated bottom hole pressures are realistic.
- [#481](https://github.com/equinor/webviz-subsurface/pull/481) - `RFT-plotter`: Added support for MD, and made ECLIPSE RFT data optional.
- [#467](https://github.com/equinor/webviz-subsurface/pull/467) - `PropertyStatistics` plugin added: QC and analysis of grid property statistics.
### Fixed
+
- [#450](https://github.com/equinor/webviz-subsurface/pull/450) - Flipped colormap for subsurface maps (such that deeper areas get darker colors). Also fixed hill shading such that input values are treated as depth, not positive elevation.
- [#459](https://github.com/equinor/webviz-subsurface/pull/459) - Bug fix in ReservoirSimulationTimeSeries. All `History` traces are now toggled when clicking `History` in the legend.
- [#474](https://github.com/equinor/webviz-subsurface/pull/474) - Bug fix in ParameterCorrelation. Constant parameters are now removed if `drop_constants` is set to `True`
@@ -210,10 +258,12 @@ for each ensemble shown on mouse hover over figure. Included dynamic sizing of p
- [#482](https://github.com/equinor/webviz-subsurface/pull/482) - Bug fix in ReservoirSimulationTimeSeries: NaN values are now dropped instead of being replaced by zeros, e.g. if some realizations are missing in one of the ensembles, if the dates don't match, or if a vector is missing in one of the ensembles.
## [0.1.3] - 2020-09-24
+
### Added
+
- [#417](https://github.com/equinor/webviz-subsurface/pull/417) - Added an optional argument `--testdata-folder` to `pytest`, can be used when [test data](https://github.com/equinor/webviz-subsurface-testdata) is in non-default location.
- [#422](https://github.com/equinor/webviz-subsurface/pull/422) - `HistoryMatch` plugin now
-quietly excludes all realizations lacking an `OK` file written by `ERT` on completion of realization workflow, similar to behavior of other plugins that read from individual realizations. Previously wrote warnings for missing data.
+ quietly excludes all realizations lacking an `OK` file written by `ERT` on completion of realization workflow, similar to behavior of other plugins that read from individual realizations. Previously wrote warnings for missing data.
- [#428](https://github.com/equinor/webviz-subsurface/pull/428) - Plugin controls, such as dropdown selections, set by the user are kept on page reload.
- [#435](https://github.com/equinor/webviz-subsurface/pull/435) - Suppress a warning in SurfaceViewerFMU when calculating statistics from surfaces where one or more surfaces have only NaN values. [#399](https://github.com/equinor/webviz-subsurface/pull/399)
- [#438](https://github.com/equinor/webviz-subsurface/pull/438) - Improved documentation of generation of data input for `RelativePermeability` plugin.
@@ -221,10 +271,13 @@ quietly excludes all realizations lacking an `OK` file written by `ERT` on compl
- [#439](https://github.com/equinor/webviz-subsurface/pull/439) - Pie chart and bar chart are now visualized together in `DiskUsage`. Free space is now visualized as well.
### Fixed
+
- [#432](https://github.com/equinor/webviz-subsurface/pull/432) - Bug fix in ReservoirSimulationTimeSeries. Vectors starting with A, V, G, I, N, T, V and L resulted in crash due to a bug introduced in [#373](https://github.com/equinor/webviz-subsurface/pull/373) (most notably group and aquifer vectors).
- [#442](https://github.com/equinor/webviz-subsurface/pull/442) - Bug fix in ReservoirSimulationTimeSeries. Wrong realization number was shown if data set contained missing realizations. Now uses correct realization number from data.
- [#447](https://github.com/equinor/webviz-subsurface/pull/447) - Changed two `webvizstore` decorated functions such that they do not take in `pandas` objects as arguments, which are known to not have `repr()` useful for hashing.
## [0.1.2] - 2020-08-24
+
### Changed
+
- [#415](https://github.com/equinor/webviz-subsurface/pull/415) - Now using `xml` package from the standard Python library (together with [`defusedxml`](https://pypi.org/project/defusedxml/)) instead of [`bs4`](https://pypi.org/project/beautifulsoup4/).
diff --git a/setup.py b/setup.py
index 259e9f044..1a9e20831 100644
--- a/setup.py
+++ b/setup.py
@@ -60,6 +60,7 @@
"RftPlotter = webviz_subsurface.plugins:RftPlotter",
"RunningTimeAnalysisFMU = webviz_subsurface.plugins:RunningTimeAnalysisFMU",
"SegyViewer = webviz_subsurface.plugins:SegyViewer",
+ "SimulationTimeSeries = webviz_subsurface.plugins:SimulationTimeSeries",
"SeismicMisfit = webviz_subsurface.plugins:SeismicMisfit",
"StructuralUncertainty = webviz_subsurface.plugins:StructuralUncertainty",
"SubsurfaceMap = webviz_subsurface.plugins:SubsurfaceMap",
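The `setup.py` change above registers `SimulationTimeSeries` as a setuptools entry point so the framework can discover it at runtime. A minimal sketch of how such a lookup works, assuming the group name `webviz_config_plugins` and a hypothetical `find_plugin_entry_point` helper (this is not webviz-config's actual loader):

```python
# Sketch of runtime discovery of a setuptools entry point like the one
# registered above. The group name and this helper are illustrative
# assumptions, not taken from webviz-config's real plugin loader.
from importlib.metadata import entry_points


def find_plugin_entry_point(name: str, group: str = "webviz_config_plugins"):
    """Return the matching entry point, or None if the plugin is not installed."""
    eps = entry_points()
    # importlib.metadata changed API in Python 3.10: select() vs dict-style get()
    candidates = (
        eps.select(group=group) if hasattr(eps, "select") else eps.get(group, [])
    )
    for entry_point in candidates:
        if entry_point.name == name:
            return entry_point  # entry_point.load() would import the plugin class
    return None
```

Calling `find_plugin_entry_point("SimulationTimeSeries")` returns an entry point only in an environment where the package is installed; otherwise it returns `None`.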
diff --git a/webviz_subsurface/_utils/fanchart_plotting.py b/webviz_subsurface/_utils/fanchart_plotting.py
index f7f59884d..6d07386ec 100644
--- a/webviz_subsurface/_utils/fanchart_plotting.py
+++ b/webviz_subsurface/_utils/fanchart_plotting.py
@@ -10,7 +10,7 @@
@dataclass
class FreeLineData:
"""
- Dataclass for defining statistics data for freee line trace in fanchart
+ Dataclass for defining statistics data for free line trace in fanchart
`Attributes:`
* `name` - Name of statistics data
@@ -86,32 +86,33 @@ def validate_fanchart_data(data: FanchartData) -> None:
Raise ValueError if lengths are unequal
"""
- if len(data.samples) <= 0:
+ samples_length = len(data.samples)
+ if samples_length <= 0:
raise ValueError("Empty x-axis data list in FanchartData")
- if data.free_line is not None and len(data.samples) != len(data.free_line.data):
+ if data.free_line is not None and samples_length != len(data.free_line.data):
raise ValueError(
"Invalid fanchart mean value data length. len(data.samples) != len(free_line.data)"
)
- if data.minimum_maximum is not None and len(data.samples) != len(
+ if data.minimum_maximum is not None and samples_length != len(
data.minimum_maximum.minimum
):
raise ValueError(
"Invalid fanchart minimum value data length. len(data.samples) "
"!= len(data.minimum_maximum.minimum)"
)
- if data.minimum_maximum is not None and len(data.samples) != len(
+ if data.minimum_maximum is not None and samples_length != len(
data.minimum_maximum.maximum
):
raise ValueError(
"Invalid fanchart maximum value data length. len(data.samples) != "
"len(data.minimum_maximum.maximum)"
)
- if data.low_high is not None and len(data.samples) != len(data.low_high.low_data):
+ if data.low_high is not None and samples_length != len(data.low_high.low_data):
raise ValueError(
"Invalid fanchart low percentile value data length. len(data.samples) "
"!= len(data.low_high.low_data)"
)
- if data.low_high is not None and len(data.samples) != len(data.low_high.high_data):
+ if data.low_high is not None and samples_length != len(data.low_high.high_data):
raise ValueError(
"Invalid fanchart high percentile value data length. "
"len(data.samples) != len(data.low_high.high_data)"
@@ -134,6 +135,7 @@ def get_fanchart_traces(
hovertext: str = "",
hovertemplate: Optional[str] = None,
hovermode: Optional[str] = None,
+ legendrank: Optional[int] = None,
) -> List[Dict[str, Any]]:
"""
Utility function for creating statistical fanchart traces
@@ -184,6 +186,8 @@ def get_default_trace(statistics_name: str, values: np.ndarray) -> Dict[str, Any
"legendgroup": legend_group,
"showlegend": False,
}
+ if legendrank is not None:
+ trace["legendrank"] = legendrank
if not show_hoverinfo:
trace["hoverinfo"] = "skip"
return trace
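The new optional `legendrank` argument above maps to Plotly's `legendrank` trace property, which controls the ordering of legend items. The pattern can be sketched standalone (the `make_trace` helper here is a hypothetical reduction; the real helper builds full trace dicts with axes, colors, and legend groups):

```python
from typing import Any, Dict, Optional


def make_trace(name: str, legendrank: Optional[int] = None) -> Dict[str, Any]:
    """Build a minimal Plotly-style trace dict, attaching legendrank only when given."""
    trace: Dict[str, Any] = {"name": name, "mode": "lines", "showlegend": False}
    # An explicit `is not None` check keeps a rank of 0 from being silently
    # dropped, which a bare truthiness test (`if legendrank:`) would do.
    if legendrank is not None:
        trace["legendrank"] = legendrank
    return trace
```

Traces sharing a `legendgroup` typically get the same rank so the whole group moves together in the legend.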
diff --git a/webviz_subsurface/_utils/statistics_plotting.py b/webviz_subsurface/_utils/statistics_plotting.py
new file mode 100644
index 000000000..026f2c03e
--- /dev/null
+++ b/webviz_subsurface/_utils/statistics_plotting.py
@@ -0,0 +1,234 @@
+from dataclasses import dataclass, field
+from typing import Any, Dict, List, Optional
+
+import numpy as np
+
+
+@dataclass
+class LineData:
+ """
+ Definition of line trace data for statistics plot
+
+ `Attributes:`
+ * `data` - 1D np.array of value data
+ * `name` - Name of line data
+ """
+
+ data: np.ndarray
+ name: str
+
+
+@dataclass
+class StatisticsData:
+ """
+ Dataclass defining statistics data utilized in creation of statistical plot traces
+
+ `Attributes:`
+ * `samples` - Common list of sample points (x-axis values) shared by each following value list.
+ * `free_line` - LineData with name and value data for free line trace in statistics plot
+ (e.g. mean, median, etc.)
+ * `minimum` - Optional 1D np.array of minimum value data for statistics plot
+ * `maximum` - Optional 1D np.array of maximum value data for statistics plot
+ * `low` - Optional low percentile, name and 1D np.array data for statistics plot
+ * `mid` - Optional middle percentile, name and 1D np.array data for statistics plot
+ * `high` - Optional high percentile, name and 1D np.array data for statistics plot
+
+ """
+
+ # TODO:
+ # - Rename mid percentile, find better name?
+ # - Consider to replace all lines with List[LineData], where each free line must be
+ # named and provided data.
+ # - Can then be used for individual realization plots as well?
+ # - One suggestion: Create base class with: samples: list, free_lines: List[LineData]
+ # and inherit for "StatisticsData". Base class can be utilized for realization plots?
+
+ samples: list = field(default_factory=list)
+ free_line: Optional[LineData] = None
+ minimum: Optional[np.ndarray] = None
+ maximum: Optional[np.ndarray] = None
+ low: Optional[LineData] = None
+ high: Optional[LineData] = None
+ mid: Optional[LineData] = None
+
+
+def validate_statistics_data(data: StatisticsData) -> None:
+ """
+ Validation of statistics data
+
+ Ensure equal length of all statistical data lists and x-axis data list
+
+ Raise ValueError if lengths are unequal
+ """
+ samples_length = len(data.samples)
+ if samples_length <= 0:
+ raise ValueError("Empty x-axis data list in StatisticsData")
+ if data.free_line is not None and samples_length != len(data.free_line.data):
+ raise ValueError(
+ "Invalid statistics mean value data length. len(data.samples) != len(free_line.data)"
+ )
+ if data.minimum is not None and samples_length != len(data.minimum):
+ raise ValueError(
+ "Invalid statistics minimum value data length. len(data.samples) "
+ "!= len(data.minimum)"
+ )
+ if data.maximum is not None and samples_length != len(data.maximum):
+ raise ValueError(
+ "Invalid statistics maximum value data length. len(data.samples) != "
+ "len(data.maximum)"
+ )
+ if data.low is not None and samples_length != len(data.low.data):
+ raise ValueError(
+ "Invalid statistics low percentile value data length. len(data.samples) "
+ "!= len(data.low.data)"
+ )
+ if data.mid is not None and samples_length != len(data.mid.data):
+ raise ValueError(
+ "Invalid statistics middle percentile value data length. len(data.samples) "
+ "!= len(data.mid.data)"
+ )
+ if data.high is not None and samples_length != len(data.high.data):
+ raise ValueError(
+ "Invalid statistics high percentile value data length. "
+ "len(data.samples) != len(data.high.data)"
+ )
+
+
+# pylint: disable=too-many-arguments
+# pylint: disable=too-many-locals
+def create_statistics_traces(
+ data: StatisticsData,
+ color: str,
+ legend_group: str,
+ legend_name: Optional[str] = None,
+ line_shape: str = "linear",
+ xaxis: str = "x",
+ yaxis: str = "y",
+ show_legend: bool = True,
+ show_hoverinfo: bool = True,
+ hovertext: str = "",
+ hovertemplate: Optional[str] = None,
+ hovermode: Optional[str] = None,
+ legendrank: Optional[int] = None,
+) -> List[Dict[str, Any]]:
+ """
+ Utility function for creating statistical plot traces
+
+ Takes `data` containing data for each statistical feature as input, and creates a list of traces
+ for each feature. Plotly plots traces from front to end of the list, thereby the last trace is
+ plotted on top.
+
+ Note that the data is optional, which implies that only wanted statistical features needs to be
+ provided for trace plot generation.
+
+ The function provides a list of traces: [trace0, tract1, ..., traceN]
+
+ Note:
+ If hovertemplate is proved it overrides the hovertext
+
+ Returns:
+ List of statistical line traces, one for each statistical feature in data input.
+ [trace0, tract1, ..., traceN].
+ """
+
+ validate_statistics_data(data)
+
+ def get_default_trace(statistics_name: str, values: np.ndarray) -> Dict[str, Any]:
+ trace = {
+ "name": legend_name if legend_name else legend_group,
+ "x": data.samples,
+ "y": values,
+ "xaxis": xaxis,
+ "yaxis": yaxis,
+ "mode": "lines",
+ "line": {"width": 1, "color": color, "shape": line_shape},
+ "legendgroup": legend_group,
+ "showlegend": False,
+ }
+ if legendrank:
+ trace["legendrank"] = legendrank
+ if not show_hoverinfo:
+ trace["hoverinfo"] = "skip"
+ return trace
+ if hovertemplate is not None:
+ trace["hovertemplate"] = hovertemplate + statistics_name
+ else:
+ trace["hovertext"] = statistics_name + " " + hovertext
+ if hovermode is not None:
+ trace["hovermode"] = hovermode
+ return trace
+
+ traces: List[Dict[str, Any]] = []
+
+ # Minimum
+ if data.minimum is not None:
+ minimum_trace = get_default_trace(
+ statistics_name="Minimum",
+ values=data.minimum,
+ )
+ minimum_trace["line"] = {
+ "color": color,
+ "shape": line_shape,
+ "dash": "longdash",
+ "width": 1.5,
+ }
+ traces.append(minimum_trace)
+
+ # Low percentile
+ if data.low is not None:
+ low_trace = get_default_trace(
+ statistics_name=data.low.name, values=data.low.data
+ )
+ low_trace["line"] = {"color": color, "shape": line_shape, "dash": "dashdot"}
+ traces.append(low_trace)
+
+ # Mid percentile
+ if data.mid is not None:
+ mid_trace = get_default_trace(
+ statistics_name=data.mid.name, values=data.mid.data
+ )
+ mid_trace["line"] = {
+ "color": color,
+ "shape": line_shape,
+ "dash": "dot",
+ "width": 3,
+ }
+ traces.append(mid_trace)
+
+ # High percentile
+ if data.high is not None:
+ high_trace = get_default_trace(
+ statistics_name=data.high.name, values=data.high.data
+ )
+ high_trace["line"] = {"color": color, "shape": line_shape, "dash": "dashdot"}
+ traces.append(high_trace)
+
+ # Maximum
+ if data.maximum is not None:
+ maximum_trace = get_default_trace(
+ statistics_name="Maximum",
+ values=data.maximum,
+ )
+ maximum_trace["line"] = {
+ "color": color,
+ "shape": line_shape,
+ "dash": "longdash",
+ "width": 1.5,
+ }
+ traces.append(maximum_trace)
+
+ # Free line
+ if data.free_line is not None:
+ line_trace = get_default_trace(
+ statistics_name=data.free_line.name,
+ values=data.free_line.data,
+ )
+ # Set solid line
+ line_trace["line"] = {"color": color, "shape": line_shape}
+ traces.append(line_trace)
+
+ # Set legend for last trace in list
+ if len(traces) > 0:
+ traces[-1]["showlegend"] = show_legend
+
+ return traces
diff --git a/webviz_subsurface/_utils/vector_calculator.py b/webviz_subsurface/_utils/vector_calculator.py
index a9543e874..4de7f7cc4 100644
--- a/webviz_subsurface/_utils/vector_calculator.py
+++ b/webviz_subsurface/_utils/vector_calculator.py
@@ -1,6 +1,6 @@
import sys
from pathlib import Path
-from typing import Dict, List, Optional, Tuple, Union
+from typing import Dict, List, Optional, Sequence, Tuple, Union
from uuid import uuid4
import numpy as np
@@ -14,7 +14,12 @@
VectorCalculator,
)
-from .vector_selector import is_vector_name_in_vector_selector_data
+from webviz_subsurface._providers import EnsembleSummaryProvider, Frequency
+
+from .vector_selector import (
+ add_vector_to_vector_selector_data,
+ is_vector_name_in_vector_selector_data,
+)
if sys.version_info >= (3, 8):
from typing import TypedDict
@@ -276,6 +281,46 @@ def get_calculated_vector_df(
return df[columns + [name]]
+def create_calculated_vector_df(
+ expression: ExpressionInfo,
+ provider: EnsembleSummaryProvider,
+ realizations: Optional[Sequence[int]],
+ resampling_frequency: Optional[Frequency],
+) -> pd.DataFrame:
+ """Create dataframe with calculated vector from expression
+
+ If expression is not successfully evaluated, empty dataframe is returned
+
+ `Return:`
+ * Dataframe with calculated vector data made form expression - columns:\n
+ ["DATE","REAL", calculated_vector]
+ * Return empty dataframe if expression evaluation returns None
+ """
+ name: str = expression["name"]
+ expr: str = expression["expression"]
+
+ variable_vector_dict: Dict[str, str] = VectorCalculator.variable_vector_dict(
+ expression["variableVectorMap"]
+ )
+ vector_names = list(variable_vector_dict.values())
+
+ # Retrieve data for vectors in expression
+ vectors_df = provider.get_vectors_df(
+ vector_names, resampling_frequency, realizations
+ )
+
+ values: Dict[str, np.ndarray] = {}
+ for variable, vector in variable_vector_dict.items():
+ values[variable] = vectors_df[vector].values
+
+ evaluated_expression = VectorCalculator.evaluate_expression(expr, values)
+ if evaluated_expression is not None:
+ vectors_df[name] = evaluated_expression
+ return vectors_df[["DATE", "REAL", name]]
+
+ return pd.DataFrame()
+
+
@CACHE.memoize(timeout=CACHE.TIMEOUT)
def get_calculated_units(
expressions: List[ExpressionInfo],
@@ -308,3 +353,43 @@ def get_calculated_units(
except ValueError:
continue
return calculated_units
+
+
+def add_calculated_vector_to_vector_selector_data(
+ vector_selector_data: list,
+ vector_name: str,
+ description: Optional[str] = None,
+) -> None:
+ """Add calculated vector name and descritpion to vector selector data
+
+ Description is optional, and will be added at last node
+ """
+ description_str = description if description is not None else ""
+ add_vector_to_vector_selector_data(
+ vector_selector_data=vector_selector_data,
+ vector=vector_name,
+ description=description_str,
+ description_at_last_node=True,
+ )
+
+
+def add_expressions_to_vector_selector_data(
+ vector_selector_data: list, expressions: List[ExpressionInfo]
+) -> None:
+ """Add expressions to vector selector data
+
+    Adds the calculated vector name into the node structure, and the
+    expression description if present.
+ """
+ for expression in expressions:
+ if not expression["isValid"]:
+ continue
+
+ name = expression["name"]
+        description = expression.get("description")
+
+ add_calculated_vector_to_vector_selector_data(
+ vector_selector_data, name, description
+ )
diff --git a/webviz_subsurface/plugins/__init__.py b/webviz_subsurface/plugins/__init__.py
index 8b81491f4..ceb00951c 100644
--- a/webviz_subsurface/plugins/__init__.py
+++ b/webviz_subsurface/plugins/__init__.py
@@ -49,6 +49,7 @@
from ._running_time_analysis_fmu import RunningTimeAnalysisFMU
from ._segy_viewer import SegyViewer
from ._seismic_misfit import SeismicMisfit
+from ._simulation_time_series import SimulationTimeSeries
from ._structural_uncertainty import StructuralUncertainty
from ._subsurface_map import SubsurfaceMap
from ._surface_viewer_fmu import SurfaceViewerFMU
diff --git a/webviz_subsurface/plugins/_simulation_time_series/__init__.py b/webviz_subsurface/plugins/_simulation_time_series/__init__.py
new file mode 100644
index 000000000..5c05c78ce
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/__init__.py
@@ -0,0 +1 @@
+from ._plugin import SimulationTimeSeries
diff --git a/webviz_subsurface/plugins/_simulation_time_series/_callbacks.py b/webviz_subsurface/plugins/_simulation_time_series/_callbacks.py
new file mode 100644
index 000000000..f0bf06221
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/_callbacks.py
@@ -0,0 +1,749 @@
+import copy
+from typing import Callable, Dict, List, Optional, Tuple, Union
+
+import dash
+import pandas as pd
+import webviz_subsurface_components as wsc
+from dash.dependencies import Input, Output, State
+from dash.exceptions import PreventUpdate
+from webviz_config import EncodedFile, WebvizPluginABC
+from webviz_config._theme_class import WebvizConfigTheme
+from webviz_subsurface_components import ExpressionInfo, ExternalParseData
+
+from webviz_subsurface._providers import Frequency
+from webviz_subsurface._utils.unique_theming import unique_colors
+from webviz_subsurface._utils.vector_calculator import (
+ add_expressions_to_vector_selector_data,
+ get_custom_vector_definitions_from_expressions,
+ get_selected_expressions,
+)
+from webviz_subsurface._utils.vector_selector import (
+ is_vector_name_in_vector_selector_data,
+)
+
+from ._layout import LayoutElements
+from ._property_serialization import (
+ EnsembleSubplotBuilder,
+ GraphFigureBuilderBase,
+ VectorSubplotBuilder,
+)
+from .types import (
+ DeltaEnsemble,
+ DerivedVectorsAccessor,
+ FanchartOptions,
+ ProviderSet,
+ StatisticsOptions,
+ SubplotGroupByOptions,
+ TraceOptions,
+ VisualizationOptions,
+)
+from .utils.delta_ensemble_utils import create_delta_ensemble_names
+from .utils.derived_ensemble_vectors_accessor_utils import (
+ create_derived_vectors_accessor_dict,
+)
+from .utils.history_vectors import create_history_vectors_df
+from .utils.provider_set_utils import create_vector_plot_titles_from_provider_set
+from .utils.trace_line_shape import get_simulation_line_shape
+from .utils.vector_statistics import create_vectors_statistics_df
+
+
+# pylint: disable = too-many-arguments, too-many-branches, too-many-locals, too-many-statements
+def plugin_callbacks(
+ app: dash.Dash,
+ get_uuid: Callable,
+ get_data_output: Output,
+ get_data_requested: Input,
+ input_provider_set: ProviderSet,
+ theme: WebvizConfigTheme,
+ initial_selected_vectors: List[str],
+ vector_selector_base_data: list,
+ observations: dict, # TODO: Improve typehint?
+ line_shape_fallback: str = "linear",
+) -> None:
+ # TODO: Consider adding: presampled_frequency: Optional[Frequency] argument for use when
+ # providers are presampled. To keep track of sampling frequency, and not depend on dropdown
+ # value for ViewElements.RESAMPLING_FREQUENCY_DROPDOWN (dropdown disabled when providers are
+ # presampled)
+ @app.callback(
+ Output(get_uuid(LayoutElements.GRAPH), "figure"),
+ [
+ Input(
+ get_uuid(LayoutElements.VECTOR_SELECTOR),
+ "selectedNodes",
+ ),
+ Input(get_uuid(LayoutElements.ENSEMBLES_DROPDOWN), "value"),
+ Input(
+ get_uuid(LayoutElements.VISUALIZATION_RADIO_ITEMS),
+ "value",
+ ),
+ Input(
+ get_uuid(LayoutElements.PLOT_STATISTICS_OPTIONS_CHECKLIST),
+ "value",
+ ),
+ Input(
+ get_uuid(LayoutElements.PLOT_FANCHART_OPTIONS_CHECKLIST),
+ "value",
+ ),
+ Input(
+ get_uuid(LayoutElements.PLOT_TRACE_OPTIONS_CHECKLIST),
+ "value",
+ ),
+ Input(
+ get_uuid(LayoutElements.SUBPLOT_OWNER_OPTIONS_RADIO_ITEMS),
+ "value",
+ ),
+ Input(
+ get_uuid(LayoutElements.RESAMPLING_FREQUENCY_DROPDOWN),
+ "value",
+ ),
+ Input(
+ get_uuid(LayoutElements.GRAPH_DATA_HAS_CHANGED_TRIGGER),
+ "data",
+ ),
+ ],
+ [
+ State(
+ get_uuid(LayoutElements.CREATED_DELTA_ENSEMBLES),
+ "data",
+ ),
+ State(
+ get_uuid(LayoutElements.VECTOR_CALCULATOR_EXPRESSIONS),
+ "data",
+ ),
+ State(get_uuid(LayoutElements.ENSEMBLES_DROPDOWN), "options"),
+ ],
+ )
+ def _update_graph(
+ vectors: List[str],
+ selected_ensembles: List[str],
+ visualization_value: str,
+ statistics_option_values: List[str],
+ fanchart_option_values: List[str],
+ trace_option_values: List[str],
+ subplot_owner_options_value: str,
+ resampling_frequency_value: str,
+ __graph_data_has_changed_trigger: int,
+ delta_ensembles: List[DeltaEnsemble],
+ vector_calculator_expressions: List[ExpressionInfo],
+ ensemble_dropdown_options: List[dict],
+ ) -> dict:
+ """Callback to update all graphs based on selections
+
+ * De-serialize from JSON serializable format to strongly typed and filtered format
+ * Business logic:
+ * Functionality with "strongly typed" and filtered input format - functions and
+ classes
+ * ProviderSet for EnsembleSummaryProviders, i.e. input_provider_set
+ * DerivedEnsembleVectorsAccessor to access derived vector data from ensembles
+ with single providers or delta ensemble with two providers
+ * GraphFigureBuilder to create graph with subplots per vector or subplots per
+ ensemble, using VectorSubplotBuilder and EnsembleSubplotBuilder, respectively
+ * Create/build prop serialization in FigureBuilder by use of business logic data
+
+        NOTE: __graph_data_has_changed_trigger is only used to trigger the callback when the
+        graph data has changed and a re-render of the graph is necessary. E.g. when a selected
+        expression from the VectorCalculator gets edited without changing the expression name -
+        i.e. VectorSelector selectedNodes remain unchanged.
+ """
+ if vectors is None:
+ vectors = initial_selected_vectors
+
+ # Retrieve the selected expressions
+ selected_expressions = get_selected_expressions(
+ vector_calculator_expressions, vectors
+ )
+
+ # Convert from string values to enum types
+ visualization = VisualizationOptions(visualization_value)
+ statistics_options = [
+ StatisticsOptions(elm) for elm in statistics_option_values
+ ]
+ fanchart_options = [FanchartOptions(elm) for elm in fanchart_option_values]
+ trace_options = [TraceOptions(elm) for elm in trace_option_values]
+ subplot_owner = SubplotGroupByOptions(subplot_owner_options_value)
+ resampling_frequency = Frequency.from_string_value(resampling_frequency_value)
+ all_ensemble_names = [option["value"] for option in ensemble_dropdown_options]
+
+ if not isinstance(selected_ensembles, list):
+ raise TypeError("ensembles should always be of type list")
+
+ # Create dict of derived vectors accessors for selected ensembles
+ derived_vectors_accessors: Dict[
+ str, DerivedVectorsAccessor
+ ] = create_derived_vectors_accessor_dict(
+ ensembles=selected_ensembles,
+ vectors=vectors,
+ provider_set=input_provider_set,
+ expressions=selected_expressions,
+ delta_ensembles=delta_ensembles,
+ resampling_frequency=resampling_frequency,
+ )
+
+ # TODO: How to get metadata for calculated vector?
+ vector_line_shapes: Dict[str, str] = {
+ vector: get_simulation_line_shape(
+ line_shape_fallback,
+ vector,
+ input_provider_set.vector_metadata(vector),
+ )
+ for vector in vectors
+ }
+
+ figure_builder: GraphFigureBuilderBase
+ if subplot_owner is SubplotGroupByOptions.VECTOR:
+ # Create unique colors based on all ensemble names to preserve consistent colors
+ ensemble_colors = unique_colors(all_ensemble_names, theme)
+ vector_titles = create_vector_plot_titles_from_provider_set(
+ vectors, selected_expressions, input_provider_set
+ )
+ figure_builder = VectorSubplotBuilder(
+ vectors,
+ vector_titles,
+ ensemble_colors,
+ resampling_frequency,
+ vector_line_shapes,
+ theme,
+ )
+ elif subplot_owner is SubplotGroupByOptions.ENSEMBLE:
+ vector_colors = unique_colors(vectors, theme)
+ figure_builder = EnsembleSubplotBuilder(
+ vectors,
+ selected_ensembles,
+ vector_colors,
+ resampling_frequency,
+ vector_line_shapes,
+ theme,
+ )
+ else:
+ raise PreventUpdate
+
+ # Plotting per derived vectors accessor
+ for ensemble, accessor in derived_vectors_accessors.items():
+            # TODO: Consider removing the list and using pd.concat to obtain one single
+            # dataframe with vector columns. NB: Assumes equal sampling rate
+            # for all vectors - i.e. equal number of rows in dataframes
+
+            # Retrieve vector data from accessor
+ vectors_df_list: List[pd.DataFrame] = []
+ if accessor.has_provider_vectors():
+ vectors_df_list.append(accessor.get_provider_vectors_df())
+ if accessor.has_interval_and_average_vectors():
+ vectors_df_list.append(
+ accessor.create_interval_and_average_vectors_df()
+ )
+ if accessor.has_vector_calculator_expressions():
+ vectors_df_list.append(accessor.create_calculated_vectors_df())
+
+ for vectors_df in vectors_df_list:
+ if visualization == VisualizationOptions.REALIZATIONS:
+ figure_builder.add_realizations_traces(
+ vectors_df,
+ ensemble,
+ )
+ if visualization == VisualizationOptions.STATISTICS:
+ vectors_statistics_df = create_vectors_statistics_df(vectors_df)
+ figure_builder.add_statistics_traces(
+ vectors_statistics_df,
+ ensemble,
+ statistics_options,
+ )
+ if visualization == VisualizationOptions.FANCHART:
+ vectors_statistics_df = create_vectors_statistics_df(vectors_df)
+ figure_builder.add_fanchart_traces(
+ vectors_statistics_df,
+ ensemble,
+ fanchart_options,
+ )
+
+ # Retrieve selected input providers
+ selected_input_providers = ProviderSet(
+ {
+ name: provider
+ for name, provider in input_provider_set.items()
+ if name in selected_ensembles
+ }
+ )
+
+ # Do not add observations if only delta ensembles are selected
+ is_only_delta_ensembles = (
+ len(selected_input_providers.names()) == 0
+ and len(derived_vectors_accessors) > 0
+ )
+ if (
+ observations
+ and TraceOptions.OBSERVATIONS in trace_options
+ and not is_only_delta_ensembles
+ ):
+ for vector in vectors:
+ vector_observations = observations.get(vector)
+ if vector_observations:
+ figure_builder.add_vector_observations(vector, vector_observations)
+
+ # Add history trace
+ if TraceOptions.HISTORY in trace_options:
+ if (
+ isinstance(figure_builder, VectorSubplotBuilder)
+ and len(selected_input_providers.names()) > 0
+ ):
+ # Add history trace using first selected ensemble
+ name = selected_input_providers.names()[0]
+ provider = selected_input_providers.provider(name)
+ vector_names = provider.vector_names()
+
+ provider_vectors = [elm for elm in vectors if elm in vector_names]
+ if provider_vectors:
+ history_vectors_df = create_history_vectors_df(
+ provider, provider_vectors, resampling_frequency
+ )
+ # TODO: Handle check of non-empty dataframe better?
+ if (
+ not history_vectors_df.empty
+ and "DATE" in history_vectors_df.columns
+ ):
+ figure_builder.add_history_traces(history_vectors_df)
+
+ if isinstance(figure_builder, EnsembleSubplotBuilder):
+ # Add history trace for each ensemble
+ for name, provider in selected_input_providers.items():
+ vector_names = provider.vector_names()
+
+ provider_vectors = [elm for elm in vectors if elm in vector_names]
+ if provider_vectors:
+ history_vectors_df = create_history_vectors_df(
+ provider, provider_vectors, resampling_frequency
+ )
+ # TODO: Handle check of non-empty dataframe better?
+ if (
+ not history_vectors_df.empty
+ and "DATE" in history_vectors_df.columns
+ ):
+ figure_builder.add_history_traces(
+ history_vectors_df,
+ name,
+ )
+
+ # Create legends when all data is added to figure
+ figure_builder.create_graph_legends()
+
+ return figure_builder.get_serialized_figure()
+
+ @app.callback(
+ get_data_output,
+ [get_data_requested],
+ [
+ State(
+ get_uuid(LayoutElements.VECTOR_SELECTOR),
+ "selectedNodes",
+ ),
+ State(get_uuid(LayoutElements.ENSEMBLES_DROPDOWN), "value"),
+ State(
+ get_uuid(LayoutElements.VISUALIZATION_RADIO_ITEMS),
+ "value",
+ ),
+ State(
+ get_uuid(LayoutElements.RESAMPLING_FREQUENCY_DROPDOWN),
+ "value",
+ ),
+ State(
+ get_uuid(LayoutElements.CREATED_DELTA_ENSEMBLES),
+ "data",
+ ),
+ State(
+ get_uuid(LayoutElements.VECTOR_CALCULATOR_EXPRESSIONS),
+ "data",
+ ),
+ ],
+ )
+ def _user_download_data(
+ data_requested: Union[int, None],
+ vectors: List[str],
+ selected_ensembles: List[str],
+ visualization_value: str,
+ resampling_frequency_value: str,
+ delta_ensembles: List[DeltaEnsemble],
+ vector_calculator_expressions: List[ExpressionInfo],
+ ) -> Union[EncodedFile, str]:
+ """Callback to download data based on selections
+
+ NOTE:
+ * No history vector
+ * No filtering on statistics selections
+ * No observation data
+ """
+ if data_requested is None:
+ raise PreventUpdate
+
+ if vectors is None:
+ vectors = initial_selected_vectors
+
+ # Retrieve the selected expressions
+ selected_expressions = get_selected_expressions(
+ vector_calculator_expressions, vectors
+ )
+
+ # Convert from string values to enum types
+ visualization = VisualizationOptions(visualization_value)
+ resampling_frequency = Frequency.from_string_value(resampling_frequency_value)
+
+ if not isinstance(selected_ensembles, list):
+ raise TypeError("ensembles should always be of type list")
+
+ # Create dict of derived vectors accessors for selected ensembles
+ derived_vectors_accessors: Dict[
+ str, DerivedVectorsAccessor
+ ] = create_derived_vectors_accessor_dict(
+ ensembles=selected_ensembles,
+ vectors=vectors,
+ provider_set=input_provider_set,
+ expressions=selected_expressions,
+ delta_ensembles=delta_ensembles,
+ resampling_frequency=resampling_frequency,
+ )
+
+ # Dict with vector name as key and dataframe data as value
+ vector_dataframe_dict: Dict[str, pd.DataFrame] = {}
+
+ # Plotting per derived vectors accessor
+ for ensemble, accessor in derived_vectors_accessors.items():
+            # Retrieve vector data from accessor
+ vectors_df_list: List[pd.DataFrame] = []
+ if accessor.has_provider_vectors():
+ vectors_df_list.append(accessor.get_provider_vectors_df())
+ if accessor.has_interval_and_average_vectors():
+ vectors_df_list.append(
+ accessor.create_interval_and_average_vectors_df()
+ )
+ if accessor.has_vector_calculator_expressions():
+ vectors_df_list.append(accessor.create_calculated_vectors_df())
+
+ # Append data for each vector
+ for vectors_df in vectors_df_list:
+ vector_names = [
+ elm for elm in vectors_df.columns if elm not in ["DATE", "REAL"]
+ ]
+ for vector in vector_names:
+ if visualization == VisualizationOptions.REALIZATIONS:
+ vector_df = vectors_df[["DATE", "REAL", vector]]
+ row_count = vector_df.shape[0]
+ ensemble_name_list = [ensemble] * row_count
+ vector_df.insert(
+ loc=0, column="ENSEMBLE", value=ensemble_name_list
+ )
+ if vector_dataframe_dict.get(vector) is None:
+ vector_dataframe_dict[vector] = vector_df
+ else:
+ vector_dataframe_dict[vector] = pd.concat(
+ [vector_dataframe_dict[vector], vector_df],
+ ignore_index=True,
+ axis=0,
+ )
+
+ if visualization in [
+ VisualizationOptions.STATISTICS,
+ VisualizationOptions.FANCHART,
+ ]:
+ vectors_statistics_df = create_vectors_statistics_df(vectors_df)
+ vector_statistics_df = vectors_statistics_df[["DATE", vector]]
+ row_count = vector_statistics_df.shape[0]
+ ensemble_name_list = [ensemble] * row_count
+ vector_statistics_df.insert(
+ loc=0, column="ENSEMBLE", value=ensemble_name_list
+ )
+ if vector_dataframe_dict.get(vector) is None:
+ vector_dataframe_dict[vector] = vector_statistics_df
+ else:
+ vector_dataframe_dict[vector] = pd.concat(
+ [vector_dataframe_dict[vector], vector_statistics_df],
+ ignore_index=True,
+ axis=0,
+ )
+
+ # : is replaced with _ in filenames to stay within POSIX portable pathnames
+ # (e.g. : is not valid in a Windows path)
+ return WebvizPluginABC.plugin_data_compress(
+ [
+ {
+ "filename": f"{vector.replace(':', '_')}.csv",
+ "content": df.to_csv(index=False),
+ }
+ for vector, df in vector_dataframe_dict.items()
+ ]
+ )
+
+ @app.callback(
+ [
+ Output(
+ get_uuid(LayoutElements.PLOT_STATISTICS_OPTIONS_CHECKLIST),
+ "style",
+ ),
+ Output(
+ get_uuid(LayoutElements.PLOT_FANCHART_OPTIONS_CHECKLIST),
+ "style",
+ ),
+ ],
+ [
+ Input(
+ get_uuid(LayoutElements.VISUALIZATION_RADIO_ITEMS),
+ "value",
+ )
+ ],
+ )
+ def _update_statistics_options_layout(visualization: str) -> List[dict]:
+ """Only show statistics checklist if in statistics mode"""
+
+ # Convert to enum type
+ visualization = VisualizationOptions(visualization)
+
+ def get_style(visualization_type: VisualizationOptions) -> dict:
+ return (
+ {"display": "block"}
+ if visualization == visualization_type
+ else {"display": "none"}
+ )
+
+ statistics_options_style = get_style(VisualizationOptions.STATISTICS)
+ fanchart_options_style = get_style(VisualizationOptions.FANCHART)
+
+ return [statistics_options_style, fanchart_options_style]
+
+ @app.callback(
+ [
+ Output(
+ get_uuid(LayoutElements.CREATED_DELTA_ENSEMBLES),
+ "data",
+ ),
+ Output(
+ get_uuid(LayoutElements.CREATED_DELTA_ENSEMBLE_NAMES_TABLE),
+ "data",
+ ),
+ Output(
+ get_uuid(LayoutElements.ENSEMBLES_DROPDOWN),
+ "options",
+ ),
+ ],
+ [
+ Input(
+ get_uuid(LayoutElements.DELTA_ENSEMBLE_CREATE_BUTTON),
+ "n_clicks",
+ )
+ ],
+ [
+ State(
+ get_uuid(LayoutElements.CREATED_DELTA_ENSEMBLES),
+ "data",
+ ),
+ State(
+ get_uuid(LayoutElements.DELTA_ENSEMBLE_A_DROPDOWN),
+ "value",
+ ),
+ State(
+ get_uuid(LayoutElements.DELTA_ENSEMBLE_B_DROPDOWN),
+ "value",
+ ),
+ ],
+ )
+ def _update_created_delta_ensembles_names(
+ n_clicks: int,
+ existing_delta_ensembles: List[DeltaEnsemble],
+ ensemble_a: str,
+ ensemble_b: str,
+ ) -> Tuple[List[DeltaEnsemble], List[Dict[str, str]], List[Dict[str, str]]]:
+ if n_clicks is None or n_clicks <= 0:
+ raise PreventUpdate
+
+ delta_ensemble = DeltaEnsemble(ensemble_a=ensemble_a, ensemble_b=ensemble_b)
+ if delta_ensemble in existing_delta_ensembles:
+ raise PreventUpdate
+
+ new_delta_ensembles = existing_delta_ensembles
+ new_delta_ensembles.append(delta_ensemble)
+
+ # Create delta ensemble names
+ new_delta_ensemble_names = create_delta_ensemble_names(new_delta_ensembles)
+
+ table_data = _create_delta_ensemble_table_column_data(
+ get_uuid(LayoutElements.CREATED_DELTA_ENSEMBLE_NAMES_TABLE_COLUMN),
+ new_delta_ensemble_names,
+ )
+
+ ensemble_options = [
+ {"label": ensemble, "value": ensemble}
+ for ensemble in input_provider_set.names()
+ ]
+ for elm in new_delta_ensemble_names:
+ ensemble_options.append({"label": elm, "value": elm})
+
+ return (new_delta_ensembles, table_data, ensemble_options)
+
+ @app.callback(
+ Output(get_uuid(LayoutElements.VECTOR_CALCULATOR_MODAL), "is_open"),
+ [
+ Input(get_uuid(LayoutElements.VECTOR_CALCULATOR_OPEN_BUTTON), "n_clicks"),
+ ],
+ [State(get_uuid(LayoutElements.VECTOR_CALCULATOR_MODAL), "is_open")],
+ )
+ def _toggle_vector_calculator_modal(n_open_clicks: int, is_open: bool) -> bool:
+ if n_open_clicks:
+ return not is_open
+ raise PreventUpdate
+
+ @app.callback(
+ Output(get_uuid(LayoutElements.VECTOR_CALCULATOR), "externalParseData"),
+ Input(get_uuid(LayoutElements.VECTOR_CALCULATOR), "externalParseExpression"),
+ )
+ def _parse_vector_calculator_expression(
+ expression: ExpressionInfo,
+ ) -> ExternalParseData:
+ if expression is None:
+ raise PreventUpdate
+ return wsc.VectorCalculator.external_parse_data(expression)
+
+ @app.callback(
+ [
+ Output(
+ get_uuid(LayoutElements.VECTOR_CALCULATOR_EXPRESSIONS),
+ "data",
+ ),
+ Output(get_uuid(LayoutElements.VECTOR_SELECTOR), "data"),
+ Output(get_uuid(LayoutElements.VECTOR_SELECTOR), "selectedTags"),
+ Output(get_uuid(LayoutElements.VECTOR_SELECTOR), "customVectorDefinitions"),
+ Output(
+ get_uuid(LayoutElements.GRAPH_DATA_HAS_CHANGED_TRIGGER),
+ "data",
+ ),
+ ],
+ Input(get_uuid(LayoutElements.VECTOR_CALCULATOR_MODAL), "is_open"),
+ [
+ State(
+ get_uuid(LayoutElements.VECTOR_CALCULATOR_EXPRESSIONS_OPEN_MODAL),
+ "data",
+ ),
+ State(
+ get_uuid(LayoutElements.VECTOR_CALCULATOR_EXPRESSIONS),
+ "data",
+ ),
+ State(get_uuid(LayoutElements.VECTOR_SELECTOR), "selectedNodes"),
+ State(get_uuid(LayoutElements.VECTOR_SELECTOR), "customVectorDefinitions"),
+ State(
+ get_uuid(LayoutElements.GRAPH_DATA_HAS_CHANGED_TRIGGER),
+ "data",
+ ),
+ ],
+ )
+ def _update_vector_calculator_expressions_on_modal_close(
+ is_modal_open: bool,
+ new_expressions: List[ExpressionInfo],
+ current_expressions: List[ExpressionInfo],
+ current_selected_vectors: List[str],
+ current_custom_vector_definitions: dict,
+ graph_data_has_changed_counter: int,
+ ) -> list:
+ """Update vector calculator expressions, propagate expressions to VectorSelectors,
+ update current selections and trigger re-rendering of graphing if necessary
+ """
+ if is_modal_open or (new_expressions == current_expressions):
+ raise PreventUpdate
+
+ # Create current selected expressions for comparison - Deep copy!
+ current_selected_expressions = get_selected_expressions(
+ current_expressions, current_selected_vectors
+ )
+
+ # Create new vector selector data - Deep copy!
+ new_vector_selector_data = copy.deepcopy(vector_selector_base_data)
+ add_expressions_to_vector_selector_data(
+ new_vector_selector_data, new_expressions
+ )
+
+ # Create new selected vectors - from new expressions
+ new_selected_vectors = _create_new_selected_vectors(
+ current_selected_vectors,
+ current_expressions,
+ new_expressions,
+ new_vector_selector_data,
+ )
+
+ # Get new selected expressions
+ new_selected_expressions = get_selected_expressions(
+ new_expressions, new_selected_vectors
+ )
+
+ # Get new custom vector definitions
+ new_custom_vector_definitions = get_custom_vector_definitions_from_expressions(
+ new_expressions
+ )
+
+ # Prevent updates if unchanged
+ if new_custom_vector_definitions == current_custom_vector_definitions:
+ new_custom_vector_definitions = dash.no_update
+
+ if new_selected_vectors == current_selected_vectors:
+ new_selected_vectors = dash.no_update
+
+ # If selected expressions are edited - Only trigger graph data update property when needed,
+ # i.e. names are unchanged and selectedNodes for VectorSelector remains unchanged.
+ new_graph_data_has_changed_counter = dash.no_update
+ if (
+ new_selected_expressions != current_selected_expressions
+ and new_selected_vectors == dash.no_update
+ ):
+ new_graph_data_has_changed_counter = graph_data_has_changed_counter + 1
+
+ return [
+ new_expressions,
+ new_vector_selector_data,
+ new_selected_vectors,
+ new_custom_vector_definitions,
+ new_graph_data_has_changed_counter,
+ ]
+
+ @app.callback(
+ Output(
+ get_uuid(LayoutElements.VECTOR_CALCULATOR_EXPRESSIONS_OPEN_MODAL),
+ "data",
+ ),
+ Input(get_uuid(LayoutElements.VECTOR_CALCULATOR), "expressions"),
+ )
+ def _update_vector_calculator_expressions_when_modal_open(
+ expressions: List[ExpressionInfo],
+ ) -> list:
+ new_expressions: List[ExpressionInfo] = [
+ elm for elm in expressions if elm["isValid"]
+ ]
+ return new_expressions
+
+
+def _create_delta_ensemble_table_column_data(
+ column_name: str, ensemble_names: List[str]
+) -> List[Dict[str, str]]:
+ return [{column_name: elm} for elm in ensemble_names]
+
+
+def _create_new_selected_vectors(
+ existing_selected_vectors: List[str],
+ existing_expressions: List[ExpressionInfo],
+ new_expressions: List[ExpressionInfo],
+ new_vector_selector_data: list,
+) -> List[str]:
+ valid_selections: List[str] = []
+ for vector in existing_selected_vectors:
+ new_vector: Optional[str] = vector
+
+ # Get id if vector is among existing expressions
+ dropdown_id = next(
+ (elm["id"] for elm in existing_expressions if elm["name"] == vector),
+ None,
+ )
+ # Find id among new expressions to get new/edited name
+ if dropdown_id:
+ new_vector = next(
+ (elm["name"] for elm in new_expressions if elm["id"] == dropdown_id),
+ None,
+ )
+
+        # Append if vector name exists among data
+ if new_vector is not None and is_vector_name_in_vector_selector_data(
+ new_vector, new_vector_selector_data
+ ):
+ valid_selections.append(new_vector)
+ return valid_selections
diff --git a/webviz_subsurface/plugins/_simulation_time_series/_layout.py b/webviz_subsurface/plugins/_simulation_time_series/_layout.py
new file mode 100644
index 000000000..7d83d61d9
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/_layout.py
@@ -0,0 +1,463 @@
+from typing import Callable, List, Optional
+
+import dash_bootstrap_components as dbc
+import webviz_core_components as wcc
+import webviz_subsurface_components as wsc
+from dash import dash_table, dcc, html
+from webviz_subsurface_components import ExpressionInfo
+
+from webviz_subsurface._providers import Frequency
+from webviz_subsurface._utils.vector_calculator import (
+ get_custom_vector_definitions_from_expressions,
+)
+
+from .types import (
+ FanchartOptions,
+ StatisticsOptions,
+ SubplotGroupByOptions,
+ TraceOptions,
+ VisualizationOptions,
+)
+
+
+# pylint: disable=too-few-public-methods
+class LayoutElements:
+ """
+ Definition of names of HTML-elements in layout
+
+ TODO: Consider ids as in AiO convention https://dash.plotly.com/all-in-one-components
+ """
+
+ GRAPH = "graph"
+ GRAPH_DATA_HAS_CHANGED_TRIGGER = (
+ "graph_data_has_changed_trigger" # NOTE: To force re-render of graph
+ )
+
+ ENSEMBLES_DROPDOWN = "ensembles_dropdown"
+ VECTOR_SELECTOR = "vector_selector"
+
+ VECTOR_CALCULATOR = "vector_calculator"
+ VECTOR_CALCULATOR_MODAL = "vector_calculator_modal"
+ VECTOR_CALCULATOR_OPEN_BUTTON = "vector_calculator_open_button"
+ VECTOR_CALCULATOR_EXPRESSIONS = "vector_calculator_expressions"
+ VECTOR_CALCULATOR_EXPRESSIONS_OPEN_MODAL = (
+ "vector_calculator_expressions_open_modal"
+ )
+
+ DELTA_ENSEMBLE_A_DROPDOWN = "delta_ensemble_A_dropdown"
+ DELTA_ENSEMBLE_B_DROPDOWN = "delta_ensemble_B_dropdown"
+ DELTA_ENSEMBLE_CREATE_BUTTON = "delta_ensemble_create_button"
+ CREATED_DELTA_ENSEMBLES = "created_delta_ensemble_names"
+ CREATED_DELTA_ENSEMBLE_NAMES_TABLE = "created_delta_ensemble_names_table"
+ CREATED_DELTA_ENSEMBLE_NAMES_TABLE_COLUMN = (
+ "created_delta_ensemble_names_table_column"
+ )
+
+ VISUALIZATION_RADIO_ITEMS = "visualization_radio_items"
+
+ PLOT_FANCHART_OPTIONS_CHECKLIST = "plot_fanchart_options_checklist"
+ PLOT_STATISTICS_OPTIONS_CHECKLIST = "plot_statistics_options_checklist"
+ PLOT_TRACE_OPTIONS_CHECKLIST = "plot_trace_options_checklist"
+
+ SUBPLOT_OWNER_OPTIONS_RADIO_ITEMS = "subplot_owner_options_radio_items"
+
+ RESAMPLING_FREQUENCY_DROPDOWN = "resampling_frequency_dropdown"
+
+ TOUR_STEP_MAIN_LAYOUT = "tour_step_main_layout"
+ TOUR_STEP_SETTINGS_LAYOUT = "tour_step_settings_layout"
+ TOUR_STEP_GROUP_BY = "tour_step_group_by"
+ TOUR_STEP_DELTA_ENSEMBLE = "tour_step_delta_ensemble"
+ TOUR_STEP_VISUALIZATION = "tour_step_visualization"
+ TOUR_STEP_OPTIONS = "tour_step_options"
+
+
+# pylint: disable = too-many-arguments
+def main_layout(
+ get_uuid: Callable,
+ ensemble_names: List[str],
+ vector_selector_data: list,
+ vector_calculator_data: list,
+ predefined_expressions: List[ExpressionInfo],
+ disable_resampling_dropdown: bool,
+ selected_resampling_frequency: Frequency,
+ selected_visualization: VisualizationOptions,
+ selected_vectors: Optional[List[str]] = None,
+) -> html.Div:
+ return wcc.FlexBox(
+ id=get_uuid(LayoutElements.TOUR_STEP_MAIN_LAYOUT),
+ children=[
+ # Settings layout
+ wcc.FlexColumn(
+ id=get_uuid(LayoutElements.TOUR_STEP_SETTINGS_LAYOUT),
+ children=wcc.Frame(
+ style={"height": "90vh"},
+ children=__settings_layout(
+ get_uuid=get_uuid,
+ ensembles=ensemble_names,
+ vector_selector_data=vector_selector_data,
+ vector_calculator_data=vector_calculator_data,
+ predefined_expressions=predefined_expressions,
+ disable_resampling_dropdown=disable_resampling_dropdown,
+ selected_resampling_frequency=selected_resampling_frequency,
+ selected_visualization=selected_visualization,
+ selected_vectors=selected_vectors,
+ ),
+ ),
+ ),
+ # Graph layout
+ wcc.FlexColumn(
+ flex=4,
+ children=[
+ wcc.Frame(
+ style={"height": "90vh"},
+ highlight=False,
+ color="white",
+ children=[
+ wcc.Graph(
+ style={"height": "85vh"},
+ id=get_uuid(LayoutElements.GRAPH),
+ ),
+ dcc.Store(
+ # NOTE: Used to trigger graph update callback if data has
+ # changed, i.e. no change of regular INPUT html-elements
+ id=get_uuid(
+ LayoutElements.GRAPH_DATA_HAS_CHANGED_TRIGGER
+ ),
+ data=0,
+ ),
+ ],
+ )
+ ],
+ ),
+ ],
+ )
+
+
+# pylint: disable = too-many-arguments
+def __settings_layout(
+ get_uuid: Callable,
+ ensembles: List[str],
+ vector_selector_data: list,
+ vector_calculator_data: list,
+ predefined_expressions: List[ExpressionInfo],
+ disable_resampling_dropdown: bool,
+ selected_resampling_frequency: Frequency,
+ selected_visualization: VisualizationOptions,
+ selected_vectors: Optional[List[str]] = None,
+) -> html.Div:
+ return html.Div(
+ children=[
+ wcc.Selectors(
+ label="Group By",
+ id=get_uuid(LayoutElements.TOUR_STEP_GROUP_BY),
+ children=[
+ wcc.RadioItems(
+ id=get_uuid(LayoutElements.SUBPLOT_OWNER_OPTIONS_RADIO_ITEMS),
+ options=[
+ {
+ "label": "Time Series",
+ "value": SubplotGroupByOptions.VECTOR.value,
+ },
+ {
+ "label": "Ensemble",
+ "value": SubplotGroupByOptions.ENSEMBLE.value,
+ },
+ ],
+ value=SubplotGroupByOptions.VECTOR.value,
+ ),
+ ],
+ ),
+ wcc.Selectors(
+ label="Resampling frequency",
+ children=[
+ wcc.Dropdown(
+ id=get_uuid(LayoutElements.RESAMPLING_FREQUENCY_DROPDOWN),
+ clearable=False,
+ disabled=disable_resampling_dropdown,
+ options=[
+ {
+ "label": frequency.value,
+ "value": frequency.value,
+ }
+ for frequency in Frequency
+ ],
+ value=selected_resampling_frequency,
+ ),
+ wcc.Label(
+ "NB: Disabled for presampled data",
+ style={"font-style": "italic"}
+ if disable_resampling_dropdown
+ else {"display": "none"},
+ ),
+ ],
+ ),
+ wcc.Selectors(
+ label="Ensembles",
+ children=[
+ wcc.Dropdown(
+ label="Selected ensembles",
+ id=get_uuid(LayoutElements.ENSEMBLES_DROPDOWN),
+ clearable=False,
+ multi=True,
+ options=[
+ {"label": ensemble, "value": ensemble}
+ for ensemble in ensembles
+ ],
+ value=None if len(ensembles) <= 0 else [ensembles[0]],
+ ),
+ wcc.Selectors(
+ label="Delta Ensembles",
+ id=get_uuid(LayoutElements.TOUR_STEP_DELTA_ENSEMBLE),
+ children=[
+ __delta_ensemble_creator_layout(
+ get_uuid=get_uuid,
+ ensembles=ensembles,
+ )
+ ],
+ ),
+ ],
+ ),
+ wcc.Selectors(
+ label="Time Series",
+ children=[
+ wsc.VectorSelector(
+ id=get_uuid(LayoutElements.VECTOR_SELECTOR),
+ maxNumSelectedNodes=3,
+ data=vector_selector_data,
+ persistence=True,
+ persistence_type="session",
+ selectedTags=[]
+ if selected_vectors is None
+ else selected_vectors,
+ numSecondsUntilSuggestionsAreShown=0.5,
+ lineBreakAfterTag=True,
+ customVectorDefinitions=get_custom_vector_definitions_from_expressions(
+ predefined_expressions
+ ),
+ ),
+ html.Button(
+ "Vector Calculator",
+ id=get_uuid(LayoutElements.VECTOR_CALCULATOR_OPEN_BUTTON),
+ style={
+ "margin-top": "5px",
+ "margin-bottom": "5px",
+ },
+ ),
+ ],
+ ),
+ wcc.Selectors(
+ label="Visualization",
+ id=get_uuid(LayoutElements.TOUR_STEP_VISUALIZATION),
+ children=[
+ wcc.RadioItems(
+ id=get_uuid(LayoutElements.VISUALIZATION_RADIO_ITEMS),
+ options=[
+ {
+ "label": "Individual realizations",
+ "value": VisualizationOptions.REALIZATIONS.value,
+ },
+ {
+ "label": "Statistical lines",
+ "value": VisualizationOptions.STATISTICS.value,
+ },
+ {
+ "label": "Statistical fanchart",
+ "value": VisualizationOptions.FANCHART.value,
+ },
+ ],
+ value=selected_visualization.value,
+ ),
+ ],
+ ),
+ wcc.Selectors(
+ label="Options",
+ id=get_uuid(LayoutElements.TOUR_STEP_OPTIONS),
+ children=__plot_options_layout(
+ get_uuid=get_uuid,
+ selected_visualization=selected_visualization,
+ ),
+ ),
+ __vector_calculator_modal_layout(
+ get_uuid=get_uuid,
+ vector_data=vector_calculator_data,
+ predefined_expressions=predefined_expressions,
+ ),
+ dcc.Store(
+ id=get_uuid(LayoutElements.VECTOR_CALCULATOR_EXPRESSIONS),
+ data=predefined_expressions,
+ ),
+ dcc.Store(
+ id=get_uuid(LayoutElements.VECTOR_CALCULATOR_EXPRESSIONS_OPEN_MODAL),
+ data=predefined_expressions,
+ ),
+ ]
+ )
+
+
+def __vector_calculator_modal_layout(
+ get_uuid: Callable,
+ vector_data: list,
+ predefined_expressions: List[ExpressionInfo],
+) -> dbc.Modal:
+ return dbc.Modal(
+ style={"marginTop": "20vh", "width": "1300px"},
+ children=[
+ dbc.ModalHeader("Vector Calculator"),
+ dbc.ModalBody(
+ html.Div(
+ children=[
+ wsc.VectorCalculator(
+ id=get_uuid(LayoutElements.VECTOR_CALCULATOR),
+ vectors=vector_data,
+ expressions=predefined_expressions,
+ )
+ ],
+ ),
+ ),
+ ],
+ id=get_uuid(LayoutElements.VECTOR_CALCULATOR_MODAL),
+ size="lg",
+ centered=True,
+ )
+
+
+def __delta_ensemble_creator_layout(
+ get_uuid: Callable, ensembles: List[str]
+) -> html.Div:
+ return html.Div(
+ children=[
+ wcc.Dropdown(
+ label="Ensemble A",
+ id=get_uuid(LayoutElements.DELTA_ENSEMBLE_A_DROPDOWN),
+ clearable=False,
+ options=[{"label": i, "value": i} for i in ensembles],
+ value=ensembles[0],
+ style={"min-width": "60px"},
+ ),
+ wcc.Dropdown(
+ label="Ensemble B",
+ id=get_uuid(LayoutElements.DELTA_ENSEMBLE_B_DROPDOWN),
+ clearable=False,
+ options=[{"label": i, "value": i} for i in ensembles],
+ value=ensembles[-1],
+ style={"min-width": "60px"},
+ ),
+ html.Button(
+ "Create",
+ id=get_uuid(LayoutElements.DELTA_ENSEMBLE_CREATE_BUTTON),
+ n_clicks=0,
+ style={
+ "margin-top": "5px",
+ "margin-bottom": "5px",
+ "min-width": "20px",
+ },
+ ),
+ __delta_ensemble_table_layout(get_uuid),
+ dcc.Store(
+ id=get_uuid(LayoutElements.CREATED_DELTA_ENSEMBLES),
+ data=[],
+ ), # TODO: Add predefined deltas?
+ ]
+ )
+
+
+def __delta_ensemble_table_layout(get_uuid: Callable) -> dash_table.DataTable:
+ return dash_table.DataTable(
+ id=get_uuid(LayoutElements.CREATED_DELTA_ENSEMBLE_NAMES_TABLE),
+ columns=(
+ [
+ {
+ "id": get_uuid(
+ LayoutElements.CREATED_DELTA_ENSEMBLE_NAMES_TABLE_COLUMN
+ ),
+ "name": "Created Delta (A-B)",
+ }
+ ]
+ ),
+ data=[],
+ fixed_rows={"headers": True},
+ style_as_list_view=True,
+ style_cell={"textAlign": "left"},
+ style_header={"fontWeight": "bold"},
+ style_table={
+ "maxHeight": "150px",
+ "overflowY": "auto",
+ },
+ editable=False,
+ )
+
+
+def __plot_options_layout(
+ get_uuid: Callable,
+ selected_visualization: VisualizationOptions,
+) -> html.Div:
+ return (
+ html.Div(
+ children=[
+ wcc.Checklist(
+ id=get_uuid(LayoutElements.PLOT_TRACE_OPTIONS_CHECKLIST),
+ options=[
+ {"label": "History", "value": TraceOptions.HISTORY.value},
+ {
+ "label": "Observation",
+ "value": TraceOptions.OBSERVATIONS.value,
+ },
+ ],
+ value=[TraceOptions.HISTORY.value, TraceOptions.OBSERVATIONS.value],
+ ),
+ wcc.Checklist(
+ id=get_uuid(LayoutElements.PLOT_STATISTICS_OPTIONS_CHECKLIST),
+ style={"display": "block"}
+ if VisualizationOptions.STATISTICS == selected_visualization
+ else {"display": "none"},
+ options=[
+ {"label": "Mean", "value": StatisticsOptions.MEAN.value},
+ {
+ "label": "P10 (high)",
+ "value": StatisticsOptions.P10.value,
+ },
+ {
+ "label": "P50 (median)",
+ "value": StatisticsOptions.P50.value,
+ },
+ {
+ "label": "P90 (low)",
+ "value": StatisticsOptions.P90.value,
+ },
+ {"label": "Maximum", "value": StatisticsOptions.MAX.value},
+ {"label": "Minimum", "value": StatisticsOptions.MIN.value},
+ ],
+ value=[
+ StatisticsOptions.MEAN.value,
+ StatisticsOptions.P10.value,
+ StatisticsOptions.P90.value,
+ ],
+ ),
+ wcc.Checklist(
+ id=get_uuid(LayoutElements.PLOT_FANCHART_OPTIONS_CHECKLIST),
+ style={"display": "block"}
+ if VisualizationOptions.FANCHART == selected_visualization
+ else {"display": "none"},
+ options=[
+ {
+ "label": FanchartOptions.MEAN.value,
+ "value": FanchartOptions.MEAN.value,
+ },
+ {
+ "label": FanchartOptions.P10_P90.value,
+ "value": FanchartOptions.P10_P90.value,
+ },
+ {
+ "label": FanchartOptions.MIN_MAX.value,
+ "value": FanchartOptions.MIN_MAX.value,
+ },
+ ],
+ value=[
+ FanchartOptions.MEAN.value,
+ FanchartOptions.P10_P90.value,
+ FanchartOptions.MIN_MAX.value,
+ ],
+ ),
+ ],
+ ),
+ )
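Every component id in the layout above is keyed through `get_uuid`, which combines a stable element name from `LayoutElements` with a plugin-instance uuid. A minimal sketch of that pattern, where `make_uuid_factory` is a hypothetical stand-in for the factory provided by `WebvizPluginABC.uuid`:

```python
import uuid


def make_uuid_factory(plugin_uuid: str):
    # Hypothetical stand-in for WebvizPluginABC.uuid: maps a stable element
    # name (e.g. LayoutElements.GRAPH) to a globally unique component id.
    def get_uuid(element: str) -> str:
        return f"{plugin_uuid}-{element}"

    return get_uuid


plugin_a = make_uuid_factory(str(uuid.uuid4()))
plugin_b = make_uuid_factory(str(uuid.uuid4()))

# Two plugin instances share element names but never component ids,
# so several instances can coexist in one Dash app without collisions:
assert plugin_a("graph") != plugin_b("graph")
assert plugin_a("graph").endswith("-graph")
```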
diff --git a/webviz_subsurface/plugins/_simulation_time_series/_plugin.py b/webviz_subsurface/plugins/_simulation_time_series/_plugin.py
new file mode 100644
index 000000000..176504d39
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/_plugin.py
@@ -0,0 +1,336 @@
+import copy
+import warnings
+from pathlib import Path
+from typing import Callable, Dict, List, Optional, Tuple
+
+import dash
+import webviz_core_components as wcc
+from webviz_config import WebvizPluginABC, WebvizSettings
+from webviz_config.webviz_assets import WEBVIZ_ASSETS
+
+import webviz_subsurface
+from webviz_subsurface._abbreviations.reservoir_simulation import (
+ historical_vector,
+ simulation_vector_description,
+)
+from webviz_subsurface._providers import Frequency
+from webviz_subsurface._utils.simulation_timeseries import (
+ check_and_format_observations,
+ set_simulation_line_shape_fallback,
+)
+from webviz_subsurface._utils.vector_calculator import (
+ add_expressions_to_vector_selector_data,
+ expressions_from_config,
+ validate_predefined_expression,
+)
+from webviz_subsurface._utils.vector_selector import add_vector_to_vector_selector_data
+from webviz_subsurface._utils.webvizstore_functions import get_path
+
+from ._callbacks import plugin_callbacks
+from ._layout import LayoutElements, main_layout
+from .types import VisualizationOptions
+from .types.provider_set import (
+ create_lazy_provider_set_from_paths,
+ create_presampled_provider_set_from_paths,
+)
+from .utils.from_timeseries_cumulatives import rename_vector_from_cumulative
+
+
+class SimulationTimeSeries(WebvizPluginABC):
+ # pylint: disable=too-many-arguments,too-many-locals,too-many-statements
+ def __init__(
+ self,
+ app: dash.Dash,
+ webviz_settings: WebvizSettings,
+ ensembles: Optional[list] = None,
+ rel_file_pattern: str = "share/results/unsmry/*.arrow",
+ perform_presampling: bool = False,
+ obsfile: Path = None,
+ options: dict = None,
+ sampling: str = Frequency.MONTHLY.value,
+ predefined_expressions: str = None,
+ line_shape_fallback: str = "linear",
+ ) -> None:
+ super().__init__()
+
+ # NOTE: Temporary css, pending the new wcc modal component.
+ # See: https://github.com/equinor/webviz-core-components/issues/163
+ WEBVIZ_ASSETS.add(
+ Path(webviz_subsurface.__file__).parent / "_assets" / "css" / "modal.css"
+ )
+
+ self._webviz_settings = webviz_settings
+ self._obsfile = obsfile
+
+ self._line_shape_fallback = set_simulation_line_shape_fallback(
+ line_shape_fallback
+ )
+
+ # Must define a valid frequency!
+ if Frequency.from_string_value(sampling) is None:
+ raise ValueError(
+ 'Sampling frequency conversion is "None", i.e. raw sampling, '
+ "which is not yet supported by the plugin!"
+ )
+ self._sampling = Frequency(sampling)
+ self._presampled_frequency = None
+
+ # TODO: Update functionality when allowing raw data and csv file input
+ # NOTE: If csv is implemented-> handle/disable statistics, INTVL_, AVG_, delta
+ # ensemble, etc.
+ if ensembles is not None:
+ ensemble_paths: Dict[str, Path] = {
+ ensemble_name: webviz_settings.shared_settings["scratch_ensembles"][
+ ensemble_name
+ ]
+ for ensemble_name in ensembles
+ }
+ if perform_presampling:
+ self._presampled_frequency = self._sampling
+ self._input_provider_set = create_presampled_provider_set_from_paths(
+ ensemble_paths, rel_file_pattern, self._presampled_frequency
+ )
+ else:
+ self._input_provider_set = create_lazy_provider_set_from_paths(
+ ensemble_paths, rel_file_pattern
+ )
+ else:
+ raise ValueError('Incorrect argument, must provide "ensembles"')
+
+ if not self._input_provider_set:
+ raise ValueError(
+ "Initial provider set is undefined, and ensemble summary providers"
+ " are not instanciated for plugin"
+ )
+
+ self._theme = webviz_settings.theme
+
+ self._observations = {}
+ if self._obsfile:
+ self._observations = check_and_format_observations(get_path(self._obsfile))
+
+ # NOTE: Initially keep set of all vector names - can make dynamic if wanted?
+ vector_names = self._input_provider_set.all_vector_names()
+ non_historical_vector_names = [
+ vector
+ for vector in vector_names
+ if historical_vector(vector, None, False) not in vector_names
+ ]
+
+ # NOTE: Initially the vector selector data is static, built from the set of
+ # vector names. It could be made dynamic based on the selected ensembles,
+ # i.e. only vectors present among the selected providers.
+ self._vector_selector_base_data: list = []
+ self._vector_calculator_data: list = []
+ for vector in non_historical_vector_names:
+ split = vector.split(":")
+ add_vector_to_vector_selector_data(
+ self._vector_selector_base_data,
+ vector,
+ simulation_vector_description(split[0]),
+ )
+ add_vector_to_vector_selector_data(
+ self._vector_calculator_data,
+ vector,
+ simulation_vector_description(split[0]),
+ )
+
+ metadata = (
+ self._input_provider_set.vector_metadata(vector)
+ if self._input_provider_set
+ else None
+ )
+ if metadata and metadata.is_total:
+ # Get the likely name for equivalent rate vector and make dropdown options.
+ # Requires that the time_index was either defined or possible to infer.
+ avgrate_vec = rename_vector_from_cumulative(vector=vector, as_rate=True)
+ interval_vec = rename_vector_from_cumulative(
+ vector=vector, as_rate=False
+ )
+
+ avgrate_split = avgrate_vec.split(":")
+ interval_split = interval_vec.split(":")
+
+ add_vector_to_vector_selector_data(
+ self._vector_selector_base_data,
+ avgrate_vec,
+ f"{simulation_vector_description(avgrate_split[0])} ({avgrate_vec})",
+ )
+ add_vector_to_vector_selector_data(
+ self._vector_selector_base_data,
+ interval_vec,
+ f"{simulation_vector_description(interval_split[0])} ({interval_vec})",
+ )
+
+ # Retrieve predefined expressions from configuration and validate
+ self._predefined_expressions_path = (
+ None
+ if predefined_expressions is None
+ else webviz_settings.shared_settings["predefined_expressions"][
+ predefined_expressions
+ ]
+ )
+ self._predefined_expressions = expressions_from_config(
+ get_path(self._predefined_expressions_path)
+ if self._predefined_expressions_path
+ else None
+ )
+ for expression in self._predefined_expressions:
+ valid, message = validate_predefined_expression(
+ expression, self._vector_selector_base_data
+ )
+ if not valid:
+ warnings.warn(message)
+ expression["isValid"] = valid
+
+ # Create initial vector selector data with predefined expressions
+ self._initial_vector_selector_data = copy.deepcopy(
+ self._vector_selector_base_data
+ )
+ add_expressions_to_vector_selector_data(
+ self._initial_vector_selector_data, self._predefined_expressions
+ )
+
+ plot_options = options if options else {}
+ self._initial_visualization_selection = VisualizationOptions(
+ plot_options.get("visualization", "statistics")
+ )
+ self._initial_vectors: List[str] = []
+ if "vectors" not in plot_options:
+ self._initial_vectors = []
+ for vector in [
+ vector
+ for vector in ["vector1", "vector2", "vector3"]
+ if vector in plot_options
+ ]:
+ self._initial_vectors.append(plot_options[vector])
+ self._initial_vectors = self._initial_vectors[:3]
+
+ # Set callbacks
+ self.set_callbacks(app)
+
+ @property
+ def layout(self) -> wcc.FlexBox:
+ return main_layout(
+ get_uuid=self.uuid,
+ ensemble_names=self._input_provider_set.names(),
+ vector_selector_data=self._initial_vector_selector_data,
+ vector_calculator_data=self._vector_calculator_data,
+ predefined_expressions=self._predefined_expressions,
+ disable_resampling_dropdown=self._presampled_frequency is not None,
+ selected_resampling_frequency=self._sampling,
+ selected_visualization=self._initial_visualization_selection,
+ selected_vectors=self._initial_vectors,
+ )
+
+ def set_callbacks(self, app: dash.Dash) -> None:
+ plugin_callbacks(
+ app=app,
+ get_uuid=self.uuid,
+ get_data_output=self.plugin_data_output,
+ get_data_requested=self.plugin_data_requested,
+ input_provider_set=self._input_provider_set,
+ theme=self._theme,
+ initial_selected_vectors=self._initial_vectors,
+ vector_selector_base_data=self._vector_selector_base_data,
+ observations=self._observations,
+ line_shape_fallback=self._line_shape_fallback,
+ )
+
+ @property
+ def tour_steps(self) -> List[dict]:
+ return [
+ {
+ "id": self.uuid(LayoutElements.TOUR_STEP_MAIN_LAYOUT),
+ "content": "Dashboard displaying reservoir simulation time series.",
+ },
+ {
+ "id": self.uuid(LayoutElements.GRAPH),
+ "content": (
+ "Visualization of selected time series. "
+ "Different options can be set in the menu to the left."
+ ),
+ },
+ {
+ "id": self.uuid(LayoutElements.TOUR_STEP_SETTINGS_LAYOUT),
+ "content": (
+ "Settings to configure data and layout of the time series visualization."
+ ),
+ },
+ {
+ "id": self.uuid(LayoutElements.TOUR_STEP_GROUP_BY),
+ "content": (
+ "Setting to group visualization data according to selection. "
+ "Subplot per selected vector or per selected ensemble."
+ ),
+ },
+ {
+ "id": self.uuid(LayoutElements.RESAMPLING_FREQUENCY_DROPDOWN),
+ "content": (
+ "Select resampling frequency for the time series data. "
+ "With presampled data, the dropdown is disabled and the presampling "
+ "frequency shown."
+ ),
+ },
+ {
+ "id": self.uuid(LayoutElements.ENSEMBLES_DROPDOWN),
+ "content": (
+ "Display time series from one or several ensembles. "
+ "Ensembles will be overlain in subplot or represented as subplot, "
+ 'based on selection in "Group By".'
+ ),
+ },
+ {
+ "id": self.uuid(LayoutElements.TOUR_STEP_DELTA_ENSEMBLE),
+ "content": (
+ "Create delta ensembles (A-B). "
+ "Define delta between two ensembles and make available among "
+ "selectable ensembles."
+ ),
+ },
+ {
+ "id": self.uuid(LayoutElements.VECTOR_SELECTOR),
+ "content": (
+ "Display up to three different time series. "
+ "Each time series will be visualized in a separate plot. "
+ "Vectors prefixed with AVG_ and INTVL_ are calculated in the fly "
+ "from cumulative vectors, providing average rates and interval cumulatives "
+ "over a time interval from the selected resampling frequency. Vectors "
+ "categorized as calculated are created using the Vector Calculator below."
+ ),
+ },
+ {
+ "id": self.uuid(LayoutElements.VECTOR_CALCULATOR_OPEN_BUTTON),
+ "content": (
+ "Create mathematical expressions with provided vector time series. "
+ "Parsing of the mathematical expression is handled and will give feedback "
+ "when entering invalid expressions. "
+ "The expressions are calculated on the fly and can be selected among the time "
+ "series to be shown in the visualization."
+ ),
+ },
+ {
+ "id": self.uuid(LayoutElements.TOUR_STEP_VISUALIZATION),
+ "content": (
+ "Choose between different visualizations. 1. Show time series as "
+ "individual lines per realization. 2. Show statistical lines per "
+ "ensemble. 3. Show statistical fanchart per ensemble"
+ ),
+ },
+ {
+ "id": self.uuid(LayoutElements.TOUR_STEP_OPTIONS),
+ "content": (
+ "Various plot options: Whether to include history trace or vector observations "
+ "and which statistics to show if statistical lines or fanchart is chosen as "
+ "visualization."
+ ),
+ },
+ ]
+
+ def add_webvizstore(self) -> List[Tuple[Callable, list]]:
+ functions: List[Tuple[Callable, list]] = []
+ if self._obsfile:
+ functions.append((get_path, [{"path": self._obsfile}]))
+ if self._predefined_expressions_path:
+ functions.append((get_path, [{"path": self._predefined_expressions_path}]))
+ return functions
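The `options` handling in `__init__` above collects up to three individually configured vectors (`vector1`..`vector3`). A standalone sketch of that parsing, where the function name is a hypothetical convenience rather than plugin API:

```python
from typing import List


def initial_vectors_from_options(plot_options: dict) -> List[str]:
    # Collect up to three individually configured vectors (vector1..vector3),
    # in order, skipping any that are absent - mirroring the option parsing
    # in SimulationTimeSeries.__init__.
    return [
        plot_options[key]
        for key in ("vector1", "vector2", "vector3")
        if key in plot_options
    ][:3]


assert initial_vectors_from_options({"vector1": "FOPT", "vector3": "FGPR"}) == ["FOPT", "FGPR"]
assert initial_vectors_from_options({}) == []
```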
diff --git a/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/__init__.py b/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/__init__.py
new file mode 100644
index 000000000..8d07bd3cd
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/__init__.py
@@ -0,0 +1,3 @@
+from .ensemble_subplot_builder import EnsembleSubplotBuilder
+from .graph_figure_builder_base import GraphFigureBuilderBase
+from .vector_subplot_builder import VectorSubplotBuilder
diff --git a/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/ensemble_subplot_builder.py b/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/ensemble_subplot_builder.py
new file mode 100644
index 000000000..4d06f6513
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/ensemble_subplot_builder.py
@@ -0,0 +1,338 @@
+from typing import Dict, List, Optional, Set
+
+import pandas as pd
+from plotly.subplots import make_subplots
+from webviz_config._theme_class import WebvizConfigTheme
+
+from webviz_subsurface._providers import Frequency
+
+from ..types import FanchartOptions, StatisticsOptions
+from ..utils.create_vector_traces_utils import (
+ create_history_vector_trace,
+ create_vector_fanchart_traces,
+ create_vector_observation_traces,
+ create_vector_realization_traces,
+ create_vector_statistics_traces,
+ render_hovertemplate,
+)
+from .graph_figure_builder_base import GraphFigureBuilderBase
+
+
+class EnsembleSubplotBuilder(GraphFigureBuilderBase):
+ """
+ Figure builder for creating/building serializable Output property data
+ for the callback. Where vector traces are added per ensemble, and subplots
+ are categorized per ensemble among selected ensembles.
+
+ Contains functions for adding titles, graph data and retreving the serialized
+ data for callback Output property.
+
+ `Input:`
+ * selected_vectors: List[str] - list of selected vector names
+ * selected_ensembles: List[str] - list of selected ensemble names
+ * vector_colors: dict - Dictionary with vector name as key and graph color as value
+ * sampling_frequency: Optional[Frequency] - Sampling frequency of data
+ * vector_line_shapes: Dict[str,str] - Dictionary of vector names and line shapes
+ * theme: Optional[WebvizConfigTheme] = None - Theme for plugin, given to graph figure
+ * line_shape_fallback: str = "linear" - Lineshape fallback
+ """
+
+ def __init__(
+ self,
+ selected_vectors: List[str],
+ selected_ensembles: List[str],
+ vector_colors: dict,
+ sampling_frequency: Optional[Frequency],
+ vector_line_shapes: Dict[str, str],
+ theme: Optional[WebvizConfigTheme] = None,
+ line_shape_fallback: str = "linear",
+ ) -> None:
+ # Init for base class
+ super().__init__()
+
+ self._selected_vectors = selected_vectors
+ self._selected_ensembles = selected_ensembles
+ self._vector_colors = vector_colors
+ self._sampling_frequency = sampling_frequency
+ self._line_shape_fallback = line_shape_fallback
+ self._vector_line_shapes = vector_line_shapes
+ self._history_vector_color = "black"
+
+ # Overwrite graph figure widget
+ self._figure = make_subplots(
+ rows=max(1, len(self._selected_ensembles)),
+ cols=1,
+ shared_xaxes=True,
+ vertical_spacing=0.05,
+ subplot_titles=[f'Ensemble: "{elm}"' for elm in self._selected_ensembles],
+ )
+ if theme:
+ self._figure.update_layout(
+ theme.create_themed_layout(self._figure.to_dict().get("layout", {}))
+ )
+ self._set_keep_uirevision()
+
+ # Set for storing added vectors
+ self._added_vector_traces: Set[str] = set()
+
+ # Status for added history vectors
+ self._added_history_trace = False
+
+ #############################################################################
+ #
+ # Public methods
+ #
+ #############################################################################
+
+ def create_graph_legends(self) -> None:
+ # Add legends for selected vectors - sort according to selected vectors
+ # NOTE: sorted() with key=self._selected_vectors.index requires that all
+ # vectors in the self._added_vector_traces set exist in self._selected_vectors!
+ added_vector_traces = sorted(
+ self._added_vector_traces, key=self._selected_vectors.index
+ )
+ for index, vector in enumerate(added_vector_traces, start=1):
+ vector_legend_trace = {
+ "name": vector,
+ "x": [None],
+ "y": [None],
+ "legendgroup": vector,
+ "showlegend": True,
+ "visible": True,
+ "mode": "lines",
+ "line": {
+ "color": self._vector_colors.get(vector, "black"),
+ "shape": self._vector_line_shapes.get(
+ vector, self._line_shape_fallback
+ ),
+ },
+ "legendrank": index,
+ }
+ self._figure.add_trace(vector_legend_trace, row=1, col=1)
+
+ # Add legend for history trace with legendrank after vectors
+ if self._added_history_trace:
+ history_legend_trace = {
+ "name": "History",
+ "x": [None],
+ "y": [None],
+ "legendgroup": "History",
+ "showlegend": True,
+ "visible": True,
+ "mode": "lines",
+ "line": {
+ "color": self._history_vector_color,
+ },
+ "legendrank": len(self._added_vector_traces) + 1,
+ }
+ self._figure.add_trace(
+ trace=history_legend_trace,
+ row=1,
+ col=1,
+ )
+
+ def add_realizations_traces(
+ self,
+ vectors_df: pd.DataFrame,
+ ensemble: str,
+ ) -> None:
+ # Dictionary with vector name as key and list of ensemble traces as value
+ vector_traces_set: Dict[str, List[dict]] = {}
+
+ # Get vectors - order not important
+ vectors: Set[str] = set(vectors_df.columns) - set(["DATE", "REAL"])
+ self._validate_vectors_are_selected(vectors)
+
+ for vector in vectors:
+ self._added_vector_traces.add(vector)
+
+ vector_df = vectors_df[["DATE", "REAL", vector]]
+ color = self._vector_colors.get(vector, "black")
+ line_shape = self._vector_line_shapes.get(vector, self._line_shape_fallback)
+ vector_traces_set[vector] = create_vector_realization_traces(
+ vector_df=vector_df,
+ ensemble=ensemble,
+ legend_group=vector,
+ color=color,
+ line_shape=line_shape,
+ hovertemplate=render_hovertemplate(vector, self._sampling_frequency),
+ )
+
+ # Add traces to figure
+ self._add_vector_traces_set_to_figure(vector_traces_set, ensemble)
+
+ def add_statistics_traces(
+ self,
+ vectors_statistics_df: pd.DataFrame,
+ ensemble: str,
+ statistics_options: List[StatisticsOptions],
+ ) -> None:
+ # Dictionary with vector name as key and list of ensemble traces as value
+ vector_traces_set: Dict[str, List[dict]] = {}
+
+ # Get vectors - order not important
+ vectors: Set[str] = set(
+ vectors_statistics_df.columns.get_level_values(0)
+ ) - set(["DATE"])
+ self._validate_vectors_are_selected(vectors)
+
+ for vector in vectors:
+ self._added_vector_traces.add(vector)
+
+ # Retrieve DATE and statistics columns for specific vector
+ vector_statistics_df = pd.DataFrame(vectors_statistics_df["DATE"]).join(
+ vectors_statistics_df[vector]
+ )
+ color = self._vector_colors.get(vector, "black")
+ line_shape = self._vector_line_shapes.get(vector, self._line_shape_fallback)
+ vector_traces_set[vector] = create_vector_statistics_traces(
+ vector_statistics_df=vector_statistics_df,
+ color=color,
+ legend_group=vector,
+ line_shape=line_shape,
+ statistics_options=statistics_options,
+ hovertemplate=render_hovertemplate(vector, self._sampling_frequency),
+ )
+
+ # Add traces to figure
+ self._add_vector_traces_set_to_figure(vector_traces_set, ensemble)
+
+ def add_fanchart_traces(
+ self,
+ vectors_statistics_df: pd.DataFrame,
+ ensemble: str,
+ fanchart_options: List[FanchartOptions],
+ ) -> None:
+ # Dictionary with vector name as key and list of ensemble traces as value
+ vector_traces_set: Dict[str, List[dict]] = {}
+
+ # Get vectors - order not important!
+ vectors: Set[str] = set(
+ vectors_statistics_df.columns.get_level_values(0)
+ ) - set(["DATE"])
+ self._validate_vectors_are_selected(vectors)
+
+ for vector in vectors:
+ self._added_vector_traces.add(vector)
+
+ # Retrieve DATE and statistics columns for specific vector
+ vector_statistics_df = pd.DataFrame(vectors_statistics_df["DATE"]).join(
+ vectors_statistics_df[vector]
+ )
+ color = self._vector_colors.get(vector, "black")
+ line_shape = self._vector_line_shapes.get(vector, self._line_shape_fallback)
+ vector_traces_set[vector] = create_vector_fanchart_traces(
+ vector_statistics_df=vector_statistics_df,
+ color=color,
+ legend_group=vector,
+ line_shape=line_shape,
+ fanchart_options=fanchart_options,
+ hovertemplate=render_hovertemplate(vector, self._sampling_frequency),
+ )
+
+ # Add traces to figure
+ self._add_vector_traces_set_to_figure(vector_traces_set, ensemble)
+
+ def add_history_traces(
+ self,
+ vectors_df: pd.DataFrame,
+ ensemble: str,
+ ) -> None:
+ if "DATE" not in vectors_df.columns and "REAL" not in vectors_df.columns:
+ raise ValueError('vectors_df is missing required columns ["DATE","REAL"]')
+
+ if ensemble is None:
+ raise ValueError(
+ "Must provide ensemble argument of type str for this implementation!"
+ )
+
+ samples = vectors_df["DATE"].tolist()
+ vector_trace_set: Dict[str, dict] = {}
+ vectors: Set[str] = set(vectors_df.columns) - set(["DATE", "REAL"])
+ self._validate_vectors_are_selected(vectors)
+
+ for vector in vectors:
+ # Set status for added history trace
+ self._added_history_trace = True
+
+ line_shape = self._vector_line_shapes.get(vector, self._line_shape_fallback)
+ vector_trace_set[vector] = create_history_vector_trace(
+ samples,
+ vectors_df[vector].values,
+ line_shape=line_shape,
+ color=self._history_vector_color,
+ vector_name=vector,
+ )
+ self._add_vector_trace_set_to_figure(vector_trace_set, ensemble)
+
+ def add_vector_observations(
+ self, vector_name: str, vector_observations: dict
+ ) -> None:
+ if vector_name not in self._selected_vectors:
+ raise ValueError(f"Vector {vector_name} not among selected vectors!")
+
+ vector_observations_traces_set = {
+ vector_name: create_vector_observation_traces(
+ vector_observations, vector_name
+ )
+ }
+ for ensemble in self._selected_ensembles:
+ self._add_vector_traces_set_to_figure(
+ vector_observations_traces_set, ensemble
+ )
+
+ #############################################################################
+ #
+ # Private methods
+ #
+ #############################################################################
+
+ def _set_keep_uirevision(
+ self,
+ ) -> None:
+ # Keep uirevision (e.g. zoom) for unchanged data.
+ self._figure.update_xaxes(uirevision="locked") # Time axis state kept
+ for i, owner in enumerate(self._selected_ensembles, start=1):
+ self._figure.update_yaxes(row=i, col=1, uirevision=owner)
+
+ def _add_vector_trace_set_to_figure(
+ self, vector_trace_set: Dict[str, dict], ensemble: Optional[str] = None
+ ) -> None:
+ # Subplot index depends only on the ensemble, so resolve it once
+ if ensemble not in self._selected_ensembles:
+ return
+ subplot_index = self._selected_ensembles.index(ensemble) + 1
+ for trace in vector_trace_set.values():
+ self._figure.add_trace(trace, row=subplot_index, col=1)
+
+ def _add_vector_traces_set_to_figure(
+ self, vector_traces_set: Dict[str, List[dict]], ensemble: Optional[str] = None
+ ) -> None:
+ # Subplot index depends only on the ensemble, so resolve it once
+ if ensemble not in self._selected_ensembles:
+ return
+ subplot_index = self._selected_ensembles.index(ensemble) + 1
+ for vector_traces in vector_traces_set.values():
+ self._figure.add_traces(vector_traces, rows=subplot_index, cols=1)
+
+ def _validate_vectors_are_selected(self, vectors: Set[str]) -> None:
+ """Validate set of vectors are among selected vectors
+
+ Check if vectors are among selected vectors for figure builder, raise
+ ValueError if not.
+
+ `Input:`
+ * vectors: Set[str] - set of vector names to verify
+ """
+ for vector in vectors:
+ if vector not in self._selected_vectors:
+ raise ValueError(
+ f'Vector "{vector}" does not exist among selected vectors: '
+ f"{self._selected_vectors}"
+ )
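The statistics and fanchart methods above slice a two-level column DataFrame per vector before building traces. A minimal sketch of that slicing with made-up data (the vector name `FOPT` and the values are purely illustrative):

```python
import pandas as pd

# Hypothetical two-level statistics frame mirroring the documented layout:
# level 0 is "DATE" plus vector names, level 1 the statistic per vector.
columns = pd.MultiIndex.from_tuples(
    [("DATE", ""), ("FOPT", "MEAN"), ("FOPT", "P10"), ("FOPT", "P90")]
)
vectors_statistics_df = pd.DataFrame(
    [
        ["2020-01-01", 10.0, 8.0, 12.0],
        ["2020-02-01", 20.0, 16.0, 24.0],
    ],
    columns=columns,
)

# Retrieve DATE and the statistics columns for one specific vector,
# exactly as the builder does before creating traces:
vector_statistics_df = pd.DataFrame(vectors_statistics_df["DATE"]).join(
    vectors_statistics_df["FOPT"]
)
print(vector_statistics_df.shape)  # → (2, 4)
```

Selecting a level-0 key drops that level, so the per-vector frame ends up with the plain statistic columns plus the joined date column.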
diff --git a/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/graph_figure_builder_base.py b/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/graph_figure_builder_base.py
new file mode 100644
index 000000000..814f718af
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/graph_figure_builder_base.py
@@ -0,0 +1,164 @@
+import abc
+from typing import Dict, List, Optional
+
+import pandas as pd
+import plotly.graph_objects as go
+
+from ..types import FanchartOptions, StatisticsOptions
+
+
+class GraphFigureBuilderBase(abc.ABC):
+ """
+ Base class for creating/building serializable Output property data
+ for the callback. Has functionality for creating various plot traces, where
+ the class inheriting the base is responsible to retrieve the data and place
+ correct in graph figure - e.g. place traces in correct subplots, set correct
+ titles, legends and so on.
+
+ Contains interface for adding graph data and retreving the serialized data
+ for callback Output property.
+
+ Contains self._figure, an empty FigureWidget to either use or override
+ """
+
+ def __init__(self) -> None:
+ self._figure = go.Figure()
+
+ # ------------------------------------
+ #
+ # Public functions
+ #
+ # ------------------------------------
+
+ def get_serialized_figure(self) -> dict:
+ """
+ Get the built figure in a JSON-serializable format - i.e. a dictionary
+ """
+ return self._figure.to_dict()
+
+ @abc.abstractmethod
+ def create_graph_legends(self) -> None:
+ """Create legends for graphs after trace data is added"""
+ ...
+
+ @abc.abstractmethod
+ def add_realizations_traces(
+ self,
+ vectors_df: pd.DataFrame,
+ ensemble: str,
+ ) -> None:
+ """Add realization traces to figure
+
+ `Input:`
+ * vectors_df: pd.DataFrame - Dataframe with columns:
+ ["DATE", "REAL", vector1, ..., vectorN]
+
+ * ensemble: str - Name of ensemble providing the input vector data
+ """
+ ...
+
+ @abc.abstractmethod
+ def add_statistics_traces(
+ self,
+ vectors_statistics_df: pd.DataFrame,
+ ensemble: str,
+ statistics_options: List[StatisticsOptions],
+ ) -> None:
+ """Add statistics traces to figure
+
+ `Input:`
+ * vectors_statistics_df: pd.DataFrame - Dataframe with double column level:\n
+ [ "DATE", vector1, ... vectorN
+ MEAN, MIN, MAX, P10, P90, P50 ... MEAN, MIN, MAX, P10, P90, P50]
+
+ * ensemble: str - Name of ensemble providing the input vector data
+ * statistics_options: List[StatisticsOptions] - List of statistics options for traces to include
+ """
+ ...
+
+ @abc.abstractmethod
+ def add_fanchart_traces(
+ self,
+ vectors_statistics_df: pd.DataFrame,
+ ensemble: str,
+ fanchart_options: List[FanchartOptions],
+ ) -> None:
+ """
+ Add fanchart traces for vectors in provided vectors statistics dataframe
+
+ `Input:`
+ * vectors_statistics_df: pd.DataFrame - Dataframe with double column level:\n
+ [ "DATE", vector1, ... vectorN
+ MEAN, MIN, MAX, P10, P90, P50 ... MEAN, MIN, MAX, P10, P90, P50]
+
+ * ensemble: str - Name of ensemble providing the input vector data
+ * fanchart_options: List[FanchartOptions] - List of fanchart options for traces to include
+ """
+ ...
+
+ @abc.abstractmethod
+ def add_history_traces(
+ self,
+ vectors_df: pd.DataFrame,
+ ensemble: str,
+ ) -> None:
+ """Add traces for historical vectors in dataframe columns
+
+ `Input:`
+ * vectors_df: pd.DataFrame - Dataframe with non-historical vector names in columns
+ and their historical data in rows. With columns:\n
+ ["DATE", "REAL", vector1, ..., vectorN]
+
+ * ensemble: str - Name of ensemble providing the input vector data
+ """
+ ...
+
+ @abc.abstractmethod
+ def add_vector_observations(
+ self, vector_name: str, vector_observations: dict
+ ) -> None:
+ """Add traces for vector observations
+
+ `Input:`
+ * vector_name: str - Vector to add observations for
+ * vector_observations: dict - Dictionary with observation data for vector
+ """
+ ...
+
+ # ------------------------------------
+ #
+ # Private functions
+ #
+ # ------------------------------------
+
+ @abc.abstractmethod
+ def _add_vector_traces_set_to_figure(
+ self, vector_traces_set: Dict[str, List[dict]], ensemble: Optional[str] = None
+ ) -> None:
+ """
+ Add list of vector line traces to figure.
+
+ Places line traces for specified vector into correct subplot of figure
+
+ `Input:`
+ * vector_traces_set: Dict[str, List[dict]] - Dictionary with vector names and list
+ of vector line traces for figure.
+ * ensemble: str - Optional name of ensemble providing the input vector data
+ """
+ ...
+
+ @abc.abstractmethod
+ def _add_vector_trace_set_to_figure(
+ self, vector_trace_set: Dict[str, dict], ensemble: Optional[str] = None
+ ) -> None:
+ """
+ Add vector line trace to figure
+
+ Places line trace for specified vector into correct subplot of figure
+
+ `Input:`
+ * vector_trace_set: Dict[str, dict] - Dictionary with vector name and single
+ vector line trace for figure.
+ * ensemble: str - Optional name of ensemble providing the input vector data
+ """
+ ...
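The abstract base above follows a small builder pattern: the base owns the figure state and the serialization, concrete subclasses decide where traces land. A stripped-down sketch of the same shape (class and method names here are illustrative, not from the codebase):

```python
import abc
from typing import Dict, List

# The abstract base owns figure-like state; concrete subclasses decide how
# traces are placed, and the result is serialized for a callback Output.
class FigureBuilderBase(abc.ABC):
    def __init__(self) -> None:
        self._traces: List[Dict[str, str]] = []  # stand-in for go.Figure traces

    def get_serialized_figure(self) -> dict:
        """Return a JSON-serializable representation of the built figure."""
        return {"data": self._traces}

    @abc.abstractmethod
    def add_trace(self, name: str) -> None:
        """Concrete builders decide how/where the trace is placed."""
        ...

class SimpleBuilder(FigureBuilderBase):
    def add_trace(self, name: str) -> None:
        self._traces.append({"name": name})

builder = SimpleBuilder()
builder.add_trace("FOPR")
print(builder.get_serialized_figure())  # → {'data': [{'name': 'FOPR'}]}
```

Because `add_trace` is abstract, instantiating the base directly raises `TypeError`, which is what forces each builder variant to define its own subplot placement.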
diff --git a/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/vector_subplot_builder.py b/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/vector_subplot_builder.py
new file mode 100644
index 000000000..0cd98e90d
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/_property_serialization/vector_subplot_builder.py
@@ -0,0 +1,339 @@
+from typing import Dict, List, Optional, Set
+
+import pandas as pd
+from plotly.subplots import make_subplots
+from webviz_config._theme_class import WebvizConfigTheme
+
+from webviz_subsurface._providers import Frequency
+
+from ..types import FanchartOptions, StatisticsOptions
+from ..utils.create_vector_traces_utils import (
+ create_history_vector_trace,
+ create_vector_fanchart_traces,
+ create_vector_observation_traces,
+ create_vector_realization_traces,
+ create_vector_statistics_traces,
+ render_hovertemplate,
+)
+from .graph_figure_builder_base import GraphFigureBuilderBase
+
+
+class VectorSubplotBuilder(GraphFigureBuilderBase):
+ """
+ Figure builder for creating/building serializable Output property data
+ for the callback, where vector traces are added per ensemble and subplots
+ are categorized per vector among the selected vectors.
+
+ Contains functions for adding titles and graph data, and for retrieving the
+ serialized data for the callback Output property.
+
+ `Input:`
+ * selected_vectors: List[str] - list of selected vector names
+ * vector_titles: Dict[str, str] - Dictionary with vector names as keys and plot titles as values
+ * ensemble_colors: dict - Dictionary with ensemble names as keys and graph colors as values
+ * sampling_frequency: Optional[Frequency] - Sampling frequency of data
+ * vector_line_shapes: Dict[str,str] - Dictionary of vector names and line shapes
+ * theme: Optional[WebvizConfigTheme] = None - Theme for plugin, given to graph figure
+ * line_shape_fallback: str = "linear" - Lineshape fallback
+ """
+
+ def __init__(
+ self,
+ selected_vectors: List[str],
+ vector_titles: Dict[str, str],
+ ensemble_colors: dict,
+ sampling_frequency: Optional[Frequency],
+ vector_line_shapes: Dict[str, str],
+ theme: Optional[WebvizConfigTheme] = None,
+ line_shape_fallback: str = "linear",
+ ) -> None:
+ # Init for base class
+ super().__init__()
+
+ self._selected_vectors = selected_vectors
+ self._ensemble_colors = ensemble_colors
+ self._sampling_frequency = sampling_frequency
+ self._vector_line_shapes = vector_line_shapes
+ self._line_shape_fallback = line_shape_fallback
+ self._history_vector_color = "black"
+
+ # Overwrite graph figure widget
+ self._figure = make_subplots(
+ rows=max(1, len(self._selected_vectors)),
+ cols=1,
+ shared_xaxes=True,
+ vertical_spacing=0.05,
+ subplot_titles=[vector_titles.get(elm, elm) for elm in selected_vectors],
+ )
+ if theme:
+ self._figure.update_layout(
+ theme.create_themed_layout(self._figure.to_dict().get("layout", {}))
+ )
+
+ self._set_keep_uirevision()
+
+ # List for storing added ensembles - kept free of duplicates
+ self._added_ensemble_traces: List[str] = []
+
+ # Status for added history vectors
+ self._added_history_trace = False
+
+ #############################################################################
+ #
+ # Public methods
+ #
+ #############################################################################
+
+ def create_graph_legends(self) -> None:
+ # Add legends for added ensembles
+ for index, ensemble in enumerate(self._added_ensemble_traces, start=1):
+ ensemble_legend_trace = {
+ "name": ensemble,
+ "x": [None],
+ "y": [None],
+ "legendgroup": ensemble,
+ "showlegend": True,
+ "visible": True,
+ "mode": "lines",
+ "line": {
+ "color": self._ensemble_colors.get(ensemble, "black"),
+ },
+ "legendrank": index,
+ }
+ self._figure.add_trace(ensemble_legend_trace, row=1, col=1)
+
+ # Add legend for history trace with legendrank after vectors
+ if self._added_history_trace:
+ history_legend_trace = {
+ "name": "History",
+ "x": [None],
+ "y": [None],
+ "legendgroup": "History",
+ "showlegend": True,
+ "visible": True,
+ "mode": "lines",
+ "line": {
+ "color": self._history_vector_color,
+ },
+ "legendrank": len(self._added_ensemble_traces) + 1,
+ }
+ self._figure.add_trace(
+ trace=history_legend_trace,
+ row=1,
+ col=1,
+ )
+
+ def add_realizations_traces(
+ self,
+ vectors_df: pd.DataFrame,
+ ensemble: str,
+ ) -> None:
+ color = self._ensemble_colors.get(ensemble)
+ if not color:
+ raise ValueError(f'Ensemble "{ensemble}" is not present in colors dict!')
+
+ # Get vectors - order not important
+ vectors: Set[str] = set(vectors_df.columns) - set(["DATE", "REAL"])
+ self._validate_vectors_are_selected(vectors)
+
+ # Dictionary with vector name as key and list of ensemble traces as value
+ vector_traces_set: Dict[str, List[dict]] = {}
+
+ for vector in vectors:
+ line_shape = self._vector_line_shapes.get(vector, self._line_shape_fallback)
+ vector_traces_set[vector] = create_vector_realization_traces(
+ vector_df=vectors_df[["DATE", "REAL", vector]],
+ ensemble=ensemble,
+ legend_group=ensemble,
+ color=color,
+ line_shape=line_shape,
+ hovertemplate=render_hovertemplate(vector, self._sampling_frequency),
+ )
+
+ # If vector data is added for ensemble
+ if vector_traces_set:
+ self._update_added_ensemble_traces_list(ensemble)
+
+ # Add traces to figure
+ self._add_vector_traces_set_to_figure(vector_traces_set)
+
+ def add_statistics_traces(
+ self,
+ vectors_statistics_df: pd.DataFrame,
+ ensemble: str,
+ statistics_options: List[StatisticsOptions],
+ ) -> None:
+ color = self._ensemble_colors.get(ensemble)
+ if not color:
+ raise ValueError(f'Ensemble "{ensemble}" is not present in colors dict!')
+
+ # Dictionary with vector name as key and list of ensemble traces as value
+ vector_traces_set: Dict[str, List[dict]] = {}
+
+ # Get vectors - order not important
+ vectors: Set[str] = set(
+ vectors_statistics_df.columns.get_level_values(0)
+ ) - set(["DATE"])
+ self._validate_vectors_are_selected(vectors)
+
+ for vector in vectors:
+ # Retrieve DATE and statistics columns for specific vector
+ vector_statistics_df = pd.DataFrame(vectors_statistics_df["DATE"]).join(
+ vectors_statistics_df[vector]
+ )
+ line_shape = self._vector_line_shapes.get(vector, self._line_shape_fallback)
+ vector_traces_set[vector] = create_vector_statistics_traces(
+ vector_statistics_df=vector_statistics_df,
+ color=color,
+ legend_group=ensemble,
+ line_shape=line_shape,
+ hovertemplate=render_hovertemplate(vector, self._sampling_frequency),
+ statistics_options=statistics_options,
+ )
+
+ # If vector data is added for ensemble
+ if vector_traces_set:
+ self._update_added_ensemble_traces_list(ensemble)
+
+ # Add traces to figure
+ self._add_vector_traces_set_to_figure(vector_traces_set)
+
+ def add_fanchart_traces(
+ self,
+ vectors_statistics_df: pd.DataFrame,
+ ensemble: str,
+ fanchart_options: List[FanchartOptions],
+ ) -> None:
+ color = self._ensemble_colors.get(ensemble)
+ if not color:
+ raise ValueError(f'Ensemble "{ensemble}" is not present in colors dict!')
+
+ # Dictionary with vector name as key and list of ensemble traces as value
+ vector_traces_set: Dict[str, List[dict]] = {}
+
+ # Get vectors - order not important
+ vectors: Set[str] = set(
+ vectors_statistics_df.columns.get_level_values(0)
+ ) - set(["DATE"])
+ self._validate_vectors_are_selected(vectors)
+
+ for vector in vectors:
+ # Retrieve DATE and statistics columns for specific vector
+ vector_statistics_df = pd.DataFrame(vectors_statistics_df["DATE"]).join(
+ vectors_statistics_df[vector]
+ )
+ line_shape = self._vector_line_shapes.get(vector, self._line_shape_fallback)
+ vector_traces_set[vector] = create_vector_fanchart_traces(
+ vector_statistics_df=vector_statistics_df,
+ color=color,
+ legend_group=ensemble,
+ line_shape=line_shape,
+ fanchart_options=fanchart_options,
+ hovertemplate=render_hovertemplate(vector, self._sampling_frequency),
+ )
+
+ # If vector data is added for ensemble
+ if vector_traces_set:
+ self._update_added_ensemble_traces_list(ensemble)
+
+ # Add traces to figure
+ self._add_vector_traces_set_to_figure(vector_traces_set)
+
+ def add_history_traces(
+ self,
+ vectors_df: pd.DataFrame,
+ __ensemble: Optional[str] = None,
+ ) -> None:
+ # NOTE: Not using ensemble argument for this implementation!
+
+ if "DATE" not in vectors_df.columns and "REAL" not in vectors_df.columns:
+ raise ValueError('vectors_df is missing required columns ["DATE","REAL"]')
+
+ # Get vectors - order not important
+ vectors: Set[str] = set(vectors_df.columns) - set(["DATE", "REAL"])
+ self._validate_vectors_are_selected(vectors)
+
+ samples = vectors_df["DATE"].tolist()
+
+ vector_trace_set: Dict[str, dict] = {}
+ for vector in vectors:
+ # Set status for added history trace
+ self._added_history_trace = True
+
+ line_shape = self._vector_line_shapes.get(vector, self._line_shape_fallback)
+ vector_trace_set[vector] = create_history_vector_trace(
+ samples,
+ vectors_df[vector].values,
+ line_shape=line_shape,
+ )
+
+ self._add_vector_trace_set_to_figure(vector_trace_set)
+
+ def add_vector_observations(
+ self, vector_name: str, vector_observations: dict
+ ) -> None:
+ if vector_name not in self._selected_vectors:
+ raise ValueError(f"Vector {vector_name} not among selected vectors!")
+
+ self._add_vector_traces_set_to_figure(
+ {vector_name: create_vector_observation_traces(vector_observations)}
+ )
+
+ #############################################################################
+ #
+ # Private methods
+ #
+ #############################################################################
+
+ def _set_keep_uirevision(self) -> None:
+ # Keep uirevision (e.g. zoom) for unchanged data.
+ self._figure.update_xaxes(uirevision="locked") # Time axis state kept
+ for i, vector in enumerate(self._selected_vectors, start=1):
+ self._figure.update_yaxes(row=i, col=1, uirevision=vector)
+
+ def _add_vector_trace_set_to_figure(
+ self, vector_trace_set: Dict[str, dict], __ensemble: Optional[str] = None
+ ) -> None:
+ for vector, trace in vector_trace_set.items():
+ subplot_index = (
+ self._selected_vectors.index(vector) + 1
+ if vector in self._selected_vectors
+ else None
+ )
+ if subplot_index is None:
+ continue
+ self._figure.add_trace(trace, row=subplot_index, col=1)
+
+ def _add_vector_traces_set_to_figure(
+ self, vector_traces_set: Dict[str, List[dict]], __ensemble: Optional[str] = None
+ ) -> None:
+ for vector, traces in vector_traces_set.items():
+ subplot_index = (
+ self._selected_vectors.index(vector) + 1
+ if vector in self._selected_vectors
+ else None
+ )
+ if subplot_index is None:
+ continue
+ self._figure.add_traces(traces, rows=subplot_index, cols=1)
+
+ def _update_added_ensemble_traces_list(self, ensemble: str) -> None:
+ """Update added ensemble traces list, to prevent duplicates in list"""
+ if ensemble not in self._added_ensemble_traces:
+ self._added_ensemble_traces.append(ensemble)
+
+ def _validate_vectors_are_selected(self, vectors: Set[str]) -> None:
+ """Validate set of vectors are among selected vectors
+
+ Check if vectors are among selected vectors for figure builder, raise
+ ValueError if not.
+
+ `Input:`
+ * vectors: Set[str] - set of vector names to verify
+ """
+ for vector in vectors:
+ if vector not in self._selected_vectors:
+ raise ValueError(
+ f'Vector "{vector}" does not exist among selected vectors: '
+ f"{self._selected_vectors}"
+ )
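The private helpers above map each vector to its subplot row by its position in the selected-vectors list. A minimal stand-alone sketch of that lookup (the vector names are illustrative):

```python
from typing import List, Optional

# A vector's position in the selected-vectors list gives its 1-based subplot
# row; vectors outside the selection are silently skipped (row is None).
selected_vectors: List[str] = ["FOPR", "FOPT", "FWCT"]  # hypothetical selection

def subplot_row(vector: str) -> Optional[int]:
    """Return the 1-based subplot row for a selected vector, else None."""
    if vector not in selected_vectors:
        return None
    return selected_vectors.index(vector) + 1

print(subplot_row("FOPT"))  # → 2
print(subplot_row("WOPR:A1"))  # → None
```

Returning `None` rather than raising mirrors the builders' behavior of quietly skipping traces for unselected vectors when placing them in the figure.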
diff --git a/webviz_subsurface/plugins/_simulation_time_series/types/__init__.py b/webviz_subsurface/plugins/_simulation_time_series/types/__init__.py
new file mode 100644
index 000000000..adb8fdd9c
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/types/__init__.py
@@ -0,0 +1,14 @@
+from .derived_delta_ensemble_vectors_accessor_impl import (
+ DerivedDeltaEnsembleVectorsAccessorImpl,
+)
+from .derived_ensemble_vectors_accessor_impl import DerivedEnsembleVectorsAccessorImpl
+from .derived_vectors_accessor import DerivedVectorsAccessor
+from .provider_set import ProviderSet
+from .types import (
+ DeltaEnsemble,
+ FanchartOptions,
+ StatisticsOptions,
+ SubplotGroupByOptions,
+ TraceOptions,
+ VisualizationOptions,
+)
diff --git a/webviz_subsurface/plugins/_simulation_time_series/types/derived_delta_ensemble_vectors_accessor_impl.py b/webviz_subsurface/plugins/_simulation_time_series/types/derived_delta_ensemble_vectors_accessor_impl.py
new file mode 100644
index 000000000..d648668d6
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/types/derived_delta_ensemble_vectors_accessor_impl.py
@@ -0,0 +1,311 @@
+from typing import List, Optional, Sequence, Tuple
+
+import pandas as pd
+from webviz_subsurface_components import ExpressionInfo
+
+from webviz_subsurface._providers import EnsembleSummaryProvider, Frequency
+from webviz_subsurface._utils.vector_calculator import (
+ create_calculated_vector_df,
+ get_selected_expressions,
+)
+
+from ..utils.from_timeseries_cumulatives import (
+ calculate_from_resampled_cumulative_vectors_df,
+ get_cumulative_vector_name,
+ is_interval_or_average_vector,
+)
+from .derived_vectors_accessor import DerivedVectorsAccessor
+
+
+class DerivedDeltaEnsembleVectorsAccessorImpl(DerivedVectorsAccessor):
+ """
+ Class to create derived vector data and access these for a delta ensemble.
+
+ The delta ensemble is represented a pair of two ensemble summary providers.
+
+ A sequence of vector names are provided, and data is fetched or created based on which
+ type of vectors are present in the sequence.
+
+ Vector names can be regular vectors existing among vector names in the providers, Interval
+ Delta/Average rate vector or a calculated vector from vector calculator.
+
+ Based on the vector type, the class provides an interface for retrieveing dataframes
+ for the set of such vectors for the provider.
+ """
+
+ def __init__(
+ self,
+ name: str,
+ provider_pair: Tuple[EnsembleSummaryProvider, EnsembleSummaryProvider],
+ vectors: Sequence[str],
+ expressions: Optional[List[ExpressionInfo]] = None,
+ resampling_frequency: Optional[Frequency] = None,
+ ) -> None:
+ self._name = name
+
+ if len(provider_pair) != 2:
+ raise ValueError(
+ 'Expected input argument "provider_pair" to have exactly two providers! '
+ f"Got {len(provider_pair)}"
+ )
+ self._provider_a = provider_pair[0]
+ self._provider_b = provider_pair[1]
+
+ if (
+ self._provider_a.supports_resampling()
+ != self._provider_b.supports_resampling()
+ ):
+ raise ValueError(
+ "Ensemble A and B must have the same resampling support! "
+ f"Ensemble A supports resampling: {self._provider_a.supports_resampling()}, "
+ f"Ensemble B supports resampling: {self._provider_b.supports_resampling()}"
+ )
+
+ # All common vectors in providers
+ self._common_provider_vectors = [
+ elm
+ for elm in self._provider_a.vector_names()
+ if elm in self._provider_b.vector_names()
+ ]
+
+ # Categorize vector types among the vectors in argument
+ self._provider_vectors = [
+ vector for vector in vectors if vector in self._common_provider_vectors
+ ]
+ self._interval_and_average_vectors = [
+ vector
+ for vector in vectors
+ if is_interval_or_average_vector(vector)
+ and get_cumulative_vector_name(vector) in self._common_provider_vectors
+ ]
+ self._vector_calculator_expressions = (
+ get_selected_expressions(expressions, vectors)
+ if expressions is not None
+ else []
+ )
+
+ # Set resampling frequency
+ self._resampling_frequency = (
+ resampling_frequency
+ if self._provider_a.supports_resampling()
+ and self._provider_b.supports_resampling()
+ else None
+ )
+
+ def __create_delta_ensemble_vectors_df(
+ self,
+ vector_names: Sequence[str],
+ resampling_frequency: Optional[Frequency],
+ realizations: Optional[Sequence[int]] = None,
+ ) -> pd.DataFrame:
+ """
+ Get vectors dataframe with delta vectors for ensemble A and B, for common realizations
+
+ `Return:` Dataframe with delta ensemble data for common vectors and realizations in ensemble
+ A and B.
+
+ `Output:`
+ * DataFrame with columns ["DATE", "REAL", vector1, ..., vectorN]
+
+ `Input:`
+ * vector_names: Sequence[str] - Sequence of vector names to get data for
+ * resampling_frequency: Optional[Frequency] - Optional resampling frequency
+ * realizations: Optional[Sequence[int]] - Optional sequence of realization numbers for
+ vectors
+
+ NOTE:
+ - Performs an "inner join": only matching ["DATE", "REAL"] index combinations are
+ kept - i.e. a "DATE"-"REAL" combination present in only one ensemble is neglected
+ - Ensures equal date samples and realizations by dropping NaN values
+ """
+
+ if not vector_names:
+ raise ValueError("List of requested vector names is empty")
+
+ # NOTE: index order ["REAL","DATE"] to obtain grouping by realization
+ # and order by date
+ ensemble_a_vectors_df = self._provider_a.get_vectors_df(
+ vector_names, resampling_frequency, realizations
+ ).set_index(["REAL", "DATE"])
+ ensemble_b_vectors_df = self._provider_b.get_vectors_df(
+ vector_names, resampling_frequency, realizations
+ ).set_index(["REAL", "DATE"])
+
+ # Reset index, group by "REAL" and sort groups by "DATE"
+ ensembles_delta_vectors_df = (
+ ensemble_a_vectors_df.sub(ensemble_b_vectors_df)
+ .reset_index()
+ .sort_values(["REAL", "DATE"])
+ )
+
+ return ensembles_delta_vectors_df.dropna(axis=0, how="any")
+
+ def has_provider_vectors(self) -> bool:
+ return len(self._provider_vectors) > 0
+
+ def has_interval_and_average_vectors(self) -> bool:
+ return len(self._interval_and_average_vectors) > 0
+
+ def has_vector_calculator_expressions(self) -> bool:
+ return len(self._vector_calculator_expressions) > 0
+
+ def get_provider_vectors_df(
+ self, realizations: Optional[Sequence[int]] = None
+ ) -> pd.DataFrame:
+ """
+ Get vectors dataframe with delta vectors for ensemble A and B, for common realizations and
+ selected vectors
+
+ `Return:` Dataframe with delta ensemble data for common vectors and realizations in ensemble
+ A and B.
+
+ `Output:`
+ * DataFrame with columns ["DATE", "REAL", vector1, ..., vectorN]
+ """
+ if not self.has_provider_vectors():
+ raise ValueError(
+ f'Vector data handler for provider "{self._name}" has no provider vectors'
+ )
+
+ return self.__create_delta_ensemble_vectors_df(
+ self._provider_vectors, self._resampling_frequency, realizations
+ )
+
+ def create_interval_and_average_vectors_df(
+ self,
+ realizations: Optional[Sequence[int]] = None,
+ ) -> pd.DataFrame:
+ """Get dataframe with interval delta and average rate vector data for provided vectors.
+
+ The returned dataframe contains columns with name of vector and corresponding interval delta
+ or average rate data.
+
+ Interval delta and average rate data are calculated with the same sampling frequency
+ as the provider is set with. I.e. the resampling frequency is used for providers
+ supporting resampling, otherwise the sampling frequency is fixed.
+
+ `Input:`
+ * realizations: Sequence[int] - Sequence of realization numbers to include in calculation
+
+ `Output:`
+ * dataframe with interval vector names in columns and their interval delta or
+ average rate data in rows.
+ `Columns` in dataframe: ["DATE", "REAL", vector1, ..., vectorN]
+
+ ---------------------
+ `NOTE:`
+ * Handle calculation of cumulative when raw data is added
+ * See TODO in calculate_from_resampled_cumulative_vectors_df()
+ """
+ if not self.has_interval_and_average_vectors():
+ raise ValueError(
+ f'Vector data handler for provider "{self._name}" has no interval delta '
+ "and average rate vector names"
+ )
+
+ cumulative_vector_names = [
+ get_cumulative_vector_name(elm)
+ for elm in self._interval_and_average_vectors
+ if is_interval_or_average_vector(elm)
+ ]
+ cumulative_vector_names = list(sorted(set(cumulative_vector_names)))
+
+ vectors_df = self.__create_delta_ensemble_vectors_df(
+ cumulative_vector_names, self._resampling_frequency, realizations
+ )
+
+ interval_and_average_vectors_df = pd.DataFrame()
+ for vector_name in self._interval_and_average_vectors:
+ cumulative_vector_name = get_cumulative_vector_name(vector_name)
+ interval_and_average_vector_df = (
+ calculate_from_resampled_cumulative_vectors_df(
+ vectors_df[["DATE", "REAL", cumulative_vector_name]],
+ as_rate_per_day=vector_name.startswith("AVG_"),
+ )
+ )
+ if interval_and_average_vectors_df.empty:
+ interval_and_average_vectors_df = interval_and_average_vector_df
+ else:
+ interval_and_average_vectors_df = pd.merge(
+ interval_and_average_vectors_df,
+ interval_and_average_vector_df,
+ how="inner",
+ )
+
+ return interval_and_average_vectors_df
+
+ def create_calculated_vectors_df(
+ self, realizations: Optional[Sequence[int]] = None
+ ) -> pd.DataFrame:
+ """Get dataframe with calculated vector data for provided vectors.
+
+ The returned dataframe contains columns with name of vector and corresponding calculated
+ data.
+
+ The calculated vectors for delta ensembles are created by first creating the calculated
+ vector data for ensemble A and B separately, and thereafter subtracting the data in
+ ensemble B from A. This yields the delta ensemble of the resulting calculated vectors.
+
+ Calculated vectors are created with the same sampling frequency as the providers are
+ set with. I.e. the resampling frequency is used for providers supporting resampling,
+ otherwise the sampling frequency is fixed.
+
+ `Input:`
+ * realizations: Sequence[int] - Sequence of realization numbers to include in calculation
+
+ `Output:`
+ * dataframe with vector names in columns and their calculated data in rows
+ `Columns` in dataframe: ["DATE", "REAL", vector1, ..., vectorN]
+ """
+ if not self.has_vector_calculator_expressions():
+ raise ValueError(
+ f'Assembled vector data accessor for provider "{self._name}"'
+ "has no vector calculator expressions"
+ )
+
+ provider_a_calculated_vectors_df = pd.DataFrame()
+ provider_b_calculated_vectors_df = pd.DataFrame()
+ for expression in self._vector_calculator_expressions:
+ provider_a_calculated_vector_df = create_calculated_vector_df(
+ expression, self._provider_a, realizations, self._resampling_frequency
+ )
+ provider_b_calculated_vector_df = create_calculated_vector_df(
+ expression, self._provider_b, realizations, self._resampling_frequency
+ )
+
+ if (
+ provider_a_calculated_vector_df.empty
+ or provider_b_calculated_vector_df.empty
+ ):
+ # TODO: Consider raising a ValueError if vector calculation fails in only
+ # one provider? If it fails in both, is that okay?
+ continue
+
+ def __inner_merge_dataframes(
+ first: pd.DataFrame, second: pd.DataFrame
+ ) -> pd.DataFrame:
+ if first.empty:
+ return second
+ return pd.merge(first, second, how="inner")
+
+ provider_a_calculated_vectors_df = __inner_merge_dataframes(
+ provider_a_calculated_vectors_df, provider_a_calculated_vector_df
+ )
+ provider_b_calculated_vectors_df = __inner_merge_dataframes(
+ provider_b_calculated_vectors_df, provider_b_calculated_vector_df
+ )
+
+ # Use "REAL" and "DATE" as indices
+ # NOTE: index order ["REAL","DATE"] to obtain grouping by realization
+ # and order by date
+ provider_a_calculated_vectors_df.set_index(["REAL", "DATE"], inplace=True)
+ provider_b_calculated_vectors_df.set_index(["REAL", "DATE"], inplace=True)
+
+ # Reset index, group by "REAL" and sort groups by "DATE"
+ delta_ensemble_calculated_vectors_df = (
+ provider_a_calculated_vectors_df.sub(provider_b_calculated_vectors_df)
+ .reset_index()
+ .sort_values(["REAL", "DATE"])
+ )
+
+ return delta_ensemble_calculated_vectors_df.dropna(axis=0, how="any")
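The index-aligned subtraction used throughout this class can be sketched with made-up data (the vector name `FOPR` and all values are illustrative). Indexing on `["REAL", "DATE"]` lets pandas align realizations and dates before subtracting; rows missing in either ensemble become NaN and are dropped, giving the documented inner-join behavior:

```python
import pandas as pd

# Hypothetical ensemble A: two realizations, two dates.
ens_a = pd.DataFrame(
    {
        "REAL": [0, 0, 1, 1],
        "DATE": ["2020-01-01", "2020-02-01", "2020-01-01", "2020-02-01"],
        "FOPR": [100.0, 110.0, 90.0, 95.0],
    }
).set_index(["REAL", "DATE"])

# Hypothetical ensemble B: realization 1 is missing the second date.
ens_b = pd.DataFrame(
    {
        "REAL": [0, 0, 1],
        "DATE": ["2020-01-01", "2020-02-01", "2020-01-01"],
        "FOPR": [80.0, 85.0, 90.0],
    }
).set_index(["REAL", "DATE"])

# Subtraction aligns on the ("REAL", "DATE") index; unmatched rows become NaN
# and are dropped, which is effectively an inner join on realization and date.
delta = (
    ens_a.sub(ens_b)
    .reset_index()
    .sort_values(["REAL", "DATE"])
    .dropna(axis=0, how="any")
)
print(delta["FOPR"].tolist())  # → [20.0, 25.0, 0.0]
```

The (1, "2020-02-01") sample survives in neither result row, so unequal realization coverage between the two providers silently shrinks the delta ensemble rather than erroring.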
diff --git a/webviz_subsurface/plugins/_simulation_time_series/types/derived_ensemble_vectors_accessor_impl.py b/webviz_subsurface/plugins/_simulation_time_series/types/derived_ensemble_vectors_accessor_impl.py
new file mode 100644
index 000000000..f524f8996
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/types/derived_ensemble_vectors_accessor_impl.py
@@ -0,0 +1,184 @@
+from typing import List, Optional, Sequence
+
+import pandas as pd
+from webviz_subsurface_components import ExpressionInfo
+
+from webviz_subsurface._providers import EnsembleSummaryProvider, Frequency
+from webviz_subsurface._utils.vector_calculator import (
+ create_calculated_vector_df,
+ get_selected_expressions,
+)
+
+from ..utils.from_timeseries_cumulatives import (
+ calculate_from_resampled_cumulative_vectors_df,
+ get_cumulative_vector_name,
+ is_interval_or_average_vector,
+)
+from .derived_vectors_accessor import DerivedVectorsAccessor
+
+
+class DerivedEnsembleVectorsAccessorImpl(DerivedVectorsAccessor):
+ """
+ Class to create derived vector data and access these for a regular ensemble.
+
+ The ensemble is represented with an ensemble summary provider.
+
+    A sequence of vector names is provided, and data is fetched or created based on the
+    types of vectors present in the sequence.
+
+    Vector names can be regular vectors existing among the vector names in the provider,
+    interval delta/average rate vectors, or calculated vectors from the vector calculator.
+
+    Based on the vector type, the class provides an interface for retrieving dataframes
+    for the set of such vectors from the provider.
+ """
+
+ def __init__(
+ self,
+ name: str,
+ provider: EnsembleSummaryProvider,
+ vectors: Sequence[str],
+ expressions: Optional[List[ExpressionInfo]] = None,
+ resampling_frequency: Optional[Frequency] = None,
+ ) -> None:
+ self._name = name
+ self._provider = provider
+ self._provider_vectors = [
+ vector for vector in vectors if vector in self._provider.vector_names()
+ ]
+ self._interval_and_average_vectors = [
+ vector
+ for vector in vectors
+ if is_interval_or_average_vector(vector)
+ and get_cumulative_vector_name(vector) in provider.vector_names()
+ ]
+ self._vector_calculator_expressions = (
+ get_selected_expressions(expressions, vectors)
+ if expressions is not None
+ else []
+ )
+ self._resampling_frequency = (
+ resampling_frequency if self._provider.supports_resampling() else None
+ )
+
+ def has_provider_vectors(self) -> bool:
+ return len(self._provider_vectors) > 0
+
+ def has_interval_and_average_vectors(self) -> bool:
+ return len(self._interval_and_average_vectors) > 0
+
+ def has_vector_calculator_expressions(self) -> bool:
+ return len(self._vector_calculator_expressions) > 0
+
+ def get_provider_vectors_df(
+ self, realizations: Optional[Sequence[int]] = None
+ ) -> pd.DataFrame:
+ """Get dataframe for the selected provider vectors"""
+ if not self.has_provider_vectors():
+ raise ValueError(
+ f'Vector data handler for provider "{self._name}" has no provider vectors'
+ )
+ return self._provider.get_vectors_df(
+ self._provider_vectors, self._resampling_frequency, realizations
+ )
+
+ def create_interval_and_average_vectors_df(
+ self,
+ realizations: Optional[Sequence[int]] = None,
+ ) -> pd.DataFrame:
+ """Get dataframe with interval delta and average rate vector data for provided vectors.
+
+ The returned dataframe contains columns with name of vector and corresponding interval delta
+ or average rate data.
+
+        Interval delta and average rate data are calculated with the same sampling frequency
+        the provider is set with, i.e. the resampling frequency is used for providers
+        supporting resampling, otherwise the sampling frequency is fixed.
+
+        `Input:`
+        * realizations: Sequence[int] - Sequence of realization numbers to include in calculation
+
+        `Output:`
+        * dataframe with interval vector names in columns and their calculated data in rows.
+        `Columns` in dataframe: ["DATE", "REAL", vector1, ..., vectorN]
+
+ ---------------------
+ `NOTE:`
+ * Handle calculation of cumulative when raw data is added
+ * See TODO in calculate_from_resampled_cumulative_vectors_df()
+ """
+ if not self.has_interval_and_average_vectors():
+ raise ValueError(
+ f'Vector data handler for provider "{self._name}" has no interval delta '
+ "and average rate vector names"
+ )
+
+ cumulative_vector_names = [
+ get_cumulative_vector_name(elm)
+ for elm in self._interval_and_average_vectors
+ if is_interval_or_average_vector(elm)
+ ]
+ cumulative_vector_names = list(sorted(set(cumulative_vector_names)))
+
+ vectors_df = self._provider.get_vectors_df(
+ cumulative_vector_names, self._resampling_frequency, realizations
+ )
+
+ interval_and_average_vectors_df = pd.DataFrame()
+ for vector_name in self._interval_and_average_vectors:
+ cumulative_vector_name = get_cumulative_vector_name(vector_name)
+ interval_and_average_vector_df = (
+ calculate_from_resampled_cumulative_vectors_df(
+ vectors_df[["DATE", "REAL", cumulative_vector_name]],
+ as_rate_per_day=vector_name.startswith("AVG_"),
+ )
+ )
+ if interval_and_average_vectors_df.empty:
+ interval_and_average_vectors_df = interval_and_average_vector_df
+ else:
+ interval_and_average_vectors_df = pd.merge(
+ interval_and_average_vectors_df,
+ interval_and_average_vector_df,
+ how="inner",
+ )
+
+ return interval_and_average_vectors_df
+
+ def create_calculated_vectors_df(
+ self, realizations: Optional[Sequence[int]] = None
+ ) -> pd.DataFrame:
+ """Get dataframe with calculated vector data for provided vectors.
+
+ The returned dataframe contains columns with name of vector and corresponding calculated
+ data.
+
+        Calculated vectors are created with the same sampling frequency the provider is set
+        with, i.e. the resampling frequency is used for providers supporting resampling,
+        otherwise the sampling frequency is fixed.
+
+        `Input:`
+        * realizations: Sequence[int] - Sequence of realization numbers to include in calculation
+
+ `Output:`
+ * dataframe with vector names in columns and their calculated data in rows
+ `Columns` in dataframe: ["DATE", "REAL", vector1, ..., vectorN]
+ """
+ if not self.has_vector_calculator_expressions():
+ raise ValueError(
+                f'Assembled vector data accessor for provider "{self._name}" '
+                "has no vector calculator expressions"
+ )
+ calculated_vectors_df = pd.DataFrame()
+ for expression in self._vector_calculator_expressions:
+ calculated_vector_df = create_calculated_vector_df(
+ expression, self._provider, realizations, self._resampling_frequency
+ )
+ if calculated_vectors_df.empty:
+ calculated_vectors_df = calculated_vector_df
+ else:
+ calculated_vectors_df = pd.merge(
+ calculated_vectors_df,
+ calculated_vector_df,
+ how="inner",
+ )
+ return calculated_vectors_df
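The per-vector accumulation pattern used in `create_interval_and_average_vectors_df` and `create_calculated_vectors_df` above can be sketched as follows, with tiny hand-made frames standing in for the calculated vector data (an inner `pd.merge` without `on=` joins on the shared `DATE`/`REAL` columns):

```python
import pandas as pd

# Per-vector frames as produced one calculation at a time (illustrative data)
per_vector_dfs = [
    pd.DataFrame({"DATE": ["2020-01", "2020-02"], "REAL": [0, 0], "AVG_FOPR": [1.0, 2.0]}),
    pd.DataFrame({"DATE": ["2020-01", "2020-02"], "REAL": [0, 0], "INTVL_FOPT": [5.0, 6.0]}),
]

# Accumulate with inner merges on the shared ["DATE", "REAL"] columns,
# starting from an empty frame exactly as in the accessor methods above
combined_df = pd.DataFrame()
for vector_df in per_vector_dfs:
    if combined_df.empty:
        combined_df = vector_df
    else:
        combined_df = pd.merge(combined_df, vector_df, how="inner")
```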
diff --git a/webviz_subsurface/plugins/_simulation_time_series/types/derived_vectors_accessor.py b/webviz_subsurface/plugins/_simulation_time_series/types/derived_vectors_accessor.py
new file mode 100644
index 000000000..31de8b221
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/types/derived_vectors_accessor.py
@@ -0,0 +1,37 @@
+import abc
+from typing import Optional, Sequence
+
+import pandas as pd
+
+
+class DerivedVectorsAccessor(abc.ABC):
+ @abc.abstractmethod
+ def has_provider_vectors(self) -> bool:
+ ...
+
+ @abc.abstractmethod
+ def has_interval_and_average_vectors(self) -> bool:
+ ...
+
+ @abc.abstractmethod
+ def has_vector_calculator_expressions(self) -> bool:
+ ...
+
+ @abc.abstractmethod
+ def get_provider_vectors_df(
+ self, realizations: Optional[Sequence[int]] = None
+ ) -> pd.DataFrame:
+ ...
+
+ @abc.abstractmethod
+ def create_interval_and_average_vectors_df(
+ self,
+ realizations: Optional[Sequence[int]] = None,
+ ) -> pd.DataFrame:
+ ...
+
+ @abc.abstractmethod
+ def create_calculated_vectors_df(
+ self, realizations: Optional[Sequence[int]] = None
+ ) -> pd.DataFrame:
+ ...
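As a side note, `@abc.abstractmethod` is only enforced when the class uses `ABCMeta`, e.g. by inheriting `abc.ABC`. A minimal sketch with hypothetical class names showing the enforced variant:

```python
import abc

# Illustrative stand-in for the DerivedVectorsAccessor interface; inheriting
# from abc.ABC makes Python refuse to instantiate incomplete subclasses
class VectorsAccessor(abc.ABC):
    @abc.abstractmethod
    def has_provider_vectors(self) -> bool:
        ...

class ConcreteAccessor(VectorsAccessor):
    def has_provider_vectors(self) -> bool:
        return True

accessor = ConcreteAccessor()
```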
diff --git a/webviz_subsurface/plugins/_simulation_time_series/types/provider_set.py b/webviz_subsurface/plugins/_simulation_time_series/types/provider_set.py
new file mode 100644
index 000000000..a7b251ba6
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/types/provider_set.py
@@ -0,0 +1,157 @@
+from pathlib import Path
+from typing import Dict, ItemsView, List, Optional
+
+from webviz_subsurface._providers import (
+ EnsembleSummaryProvider,
+ EnsembleSummaryProviderFactory,
+ Frequency,
+ VectorMetadata,
+)
+
+
+class ProviderSet:
+ """
+ Class to create a set of ensemble summary providers with unique names
+
+ Provides interface for read-only fetching of provider data
+ """
+
+ def __init__(self, provider_dict: Dict[str, EnsembleSummaryProvider]) -> None:
+ self._provider_dict = provider_dict.copy()
+ self._all_vector_names = self._create_union_of_vector_names_from_providers(
+ list(self._provider_dict.values())
+ )
+
+ def verify_consistent_vector_metadata(self) -> None:
+ """
+        Verify that vector metadata is consistent across providers, and raise an exception
+        if an inconsistency occurs.
+
+ TODO:
+ * Improve print of inconsistent metadata info - store all inconsistencies
+ and print, do not raise ValueError on first.
+ * Replace with vector metadata dataclass object when updated (__eq__ operator
+ for dataclass would be handy)
+ """
+
+ # Iterate through all vector names for provider set
+ for vector_name in self._all_vector_names:
+ # Store provider name and retrieved vector metadata for specific vector name
+ vector_provider_metadata_dict: Dict[str, Optional[VectorMetadata]] = {}
+
+ # Retrieve vector metadata from providers
+ for name, provider in self._provider_dict.items():
+ if vector_name in provider.vector_names():
+ vector_provider_metadata_dict[name] = provider.vector_metadata(
+ vector_name
+ )
+ if vector_provider_metadata_dict:
+ validator_provider, metadata_validator = list(
+ vector_provider_metadata_dict.items()
+ )[0]
+ for (
+ provider_name,
+ vector_metadata,
+ ) in vector_provider_metadata_dict.items():
+                if vector_metadata != metadata_validator:
+                    raise ValueError(
+                        f'Inconsistent vector metadata for vector "{vector_name}"'
+                        f' between provider "{validator_provider}" and provider '
+                        f'"{provider_name}"'
+                    )
+
+ @staticmethod
+ def _create_union_of_vector_names_from_providers(
+ providers: List[EnsembleSummaryProvider],
+ ) -> List[str]:
+ """Create list with the union of vector names among providers"""
+ vector_names = []
+ for provider in providers:
+ vector_names.extend(provider.vector_names())
+ vector_names = list(sorted(set(vector_names)))
+ return vector_names
+
+ def items(self) -> ItemsView[str, EnsembleSummaryProvider]:
+ return self._provider_dict.items()
+
+ def names(self) -> List[str]:
+ return list(self._provider_dict.keys())
+
+ def provider(self, name: str) -> EnsembleSummaryProvider:
+ if name not in self._provider_dict.keys():
+ raise ValueError(f'Provider with name "{name}" not present in set!')
+ return self._provider_dict[name]
+
+ def all_providers(self) -> List[EnsembleSummaryProvider]:
+ return list(self._provider_dict.values())
+
+ def all_vector_names(self) -> List[str]:
+ """Create list with the union of vector names among providers"""
+ return self._all_vector_names
+
+ def vector_metadata(self, vector: str) -> Optional[VectorMetadata]:
+ """Get vector metadata from first occurrence among providers,
+
+ `return:`
+ Vector metadata from first occurrence among providers, None if not existing
+ """
+ metadata: Optional[VectorMetadata] = next(
+ (
+ provider.vector_metadata(vector)
+ for provider in self._provider_dict.values()
+ if vector in provider.vector_names()
+ and provider.vector_metadata(vector)
+ ),
+ None,
+ )
+ return metadata
+
+
+def create_lazy_provider_set_from_paths(
+ name_path_dict: Dict[str, Path],
+ rel_file_pattern: str,
+) -> ProviderSet:
+ """Create set of providers with lazy (on-demand) resampling/interpolation, from
+ dictionary of ensemble name and corresponding arrow file paths
+
+    `Input:`
+    * name_path_dict: Dict[str, Path] - ensemble name as key and arrow file path as value
+    * rel_file_pattern: str - relative file pattern used by the factory to locate the
+    arrow files
+
+ `Return:`
+ Provider set with ensemble summary providers with lazy (on-demand) resampling/interpolation
+ """
+ provider_factory = EnsembleSummaryProviderFactory.instance()
+ provider_dict: Dict[str, EnsembleSummaryProvider] = {}
+ for name, path in name_path_dict.items():
+ provider_dict[name] = provider_factory.create_from_arrow_unsmry_lazy(
+ str(path), rel_file_pattern
+ )
+ return ProviderSet(provider_dict)
+
+
+def create_presampled_provider_set_from_paths(
+ name_path_dict: Dict[str, Path],
+ rel_file_pattern: str,
+ presampling_frequency: Frequency,
+) -> ProviderSet:
+ """Create set of providers without lazy resampling, but with specified frequency, from
+ dictionary of ensemble name and corresponding arrow file paths
+
+    `Input:`
+    * name_path_dict: Dict[str, Path] - ensemble name as key and arrow file path as value
+    * rel_file_pattern: str - relative file pattern used by the factory to locate the
+    arrow files
+    * presampling_frequency: Frequency - Frequency to sample input data with in the factory
+    during import.
+
+ `Return:`
+ Provider set with ensemble summary providers with presampled data according to specified
+ presampling frequency.
+ """
+ # TODO: Make presampling_frequency: Optional[Frequency] when allowing raw data for plugin
+ provider_factory = EnsembleSummaryProviderFactory.instance()
+ provider_dict: Dict[str, EnsembleSummaryProvider] = {}
+ for name, path in name_path_dict.items():
+ provider_dict[name] = provider_factory.create_from_arrow_unsmry_presampled(
+ str(path), rel_file_pattern, presampling_frequency
+ )
+ return ProviderSet(provider_dict)
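The `next(generator, None)` idiom in `ProviderSet.vector_metadata` returns the first non-empty metadata found among the providers. A small sketch with plain dicts standing in for providers (the data is hypothetical):

```python
# Plain dicts stand in for EnsembleSummaryProvider objects; the
# generator-plus-next(default) idiom returns metadata from the first
# provider that both knows the vector and has non-empty metadata for it
providers = {
    "ens-a": {"FOPT": None},             # has the vector, but no metadata
    "ens-b": {"FOPT": {"unit": "SM3"}},  # first provider with metadata
}

vector = "FOPT"
metadata = next(
    (
        vector_meta[vector]
        for vector_meta in providers.values()
        if vector in vector_meta and vector_meta[vector]
    ),
    None,
)
```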
diff --git a/webviz_subsurface/plugins/_simulation_time_series/types/types.py b/webviz_subsurface/plugins/_simulation_time_series/types/types.py
new file mode 100644
index 000000000..99d44147b
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/types/types.py
@@ -0,0 +1,68 @@
+import sys
+from enum import Enum
+
+if sys.version_info >= (3, 8):
+ from typing import TypedDict
+else:
+ from typing_extensions import TypedDict
+
+
+class DeltaEnsemble(TypedDict):
+ """Definition of delta ensemble
+
+ Pair of names representing a delta ensemble: A-B
+ """
+
+ ensemble_a: str
+ ensemble_b: str
+
+
+class FanchartOptions(str, Enum):
+ """
+ Type definition for statistical options for fanchart
+ """
+
+ MEAN = "Mean" # Mean value
+ MIN_MAX = "Min/Max" # Minimum and maximum pair
+ P10_P90 = "P10/P90" # P10 and P90 pair
+
+
+class StatisticsOptions(str, Enum):
+ """
+ Type definition for statistics options in simulation time series
+ """
+
+ MEAN = "Mean"
+ MIN = "Min"
+ MAX = "Max"
+ P10 = "P10"
+ P90 = "P90"
+ P50 = "P50"
+
+
+class SubplotGroupByOptions(str, Enum):
+ """
+ Type definition of options for subplots "group by" in graph for simulation time series
+ """
+
+ VECTOR = "vector"
+ ENSEMBLE = "ensemble"
+
+
+class TraceOptions(str, Enum):
+ """
+ Type definition for trace options in simulation time series
+ """
+
+ HISTORY = "history"
+ OBSERVATIONS = "observations"
+
+
+class VisualizationOptions(str, Enum):
+ """
+ Type definition for visualization options in simulation time series
+ """
+
+ REALIZATIONS = "realizations"
+ STATISTICS = "statistics"
+ FANCHART = "fanchart"
diff --git a/webviz_subsurface/plugins/_simulation_time_series/utils/__init__.py b/webviz_subsurface/plugins/_simulation_time_series/utils/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/webviz_subsurface/plugins/_simulation_time_series/utils/create_vector_traces_utils.py b/webviz_subsurface/plugins/_simulation_time_series/utils/create_vector_traces_utils.py
new file mode 100644
index 000000000..29a9542a3
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/utils/create_vector_traces_utils.py
@@ -0,0 +1,362 @@
+from typing import Any, Dict, List, Optional
+
+import numpy as np
+import pandas as pd
+
+from webviz_subsurface._providers import Frequency
+from webviz_subsurface._utils.fanchart_plotting import (
+ FanchartData,
+ FreeLineData,
+ LowHighData,
+ MinMaxData,
+ get_fanchart_traces,
+)
+from webviz_subsurface._utils.statistics_plotting import (
+ LineData,
+ StatisticsData,
+ create_statistics_traces,
+)
+
+from ..types import FanchartOptions, StatisticsOptions
+from ..utils.from_timeseries_cumulatives import is_interval_or_average_vector
+
+
+def create_vector_observation_traces(
+ vector_observations: dict, vector_name: Optional[str] = None
+) -> List[dict]:
+ """Create list of observations traces from vector observations
+
+ `Input:`
+ * vector_observations: dict - Dictionary with observation data for a vector
+ * vector_name: Optional[str] - Name of vector, added as legend group and name if provided
+
+ `Return:`
+ List of marker traces for each observation for vector
+ """
+ observation_traces: List[dict] = []
+
+ for observation in vector_observations.get("observations", []):
+ hovertext = observation.get("comment", "")
+        hovertemplate = (
+            "(%{x}, %{y})<br>" + hovertext if hovertext else "(%{x}, %{y})<br>"
+        )
+ trace = {
+ "name": "Observation",
+ "x": [observation.get("date"), []],
+ "y": [observation.get("value"), []],
+ "marker": {"color": "black"},
+ "hovertemplate": hovertemplate,
+ "showlegend": False,
+ "error_y": {
+ "type": "data",
+ "array": [observation.get("error"), []],
+ "visible": True,
+ },
+ }
+ if vector_name:
+ trace["name"] = "Observation: " + vector_name
+ trace["legendgroup"] = vector_name
+
+ observation_traces.append(trace)
+ return observation_traces
+
+
+def create_vector_realization_traces(
+ vector_df: pd.DataFrame,
+ ensemble: str,
+ color: str,
+ legend_group: str,
+ line_shape: str,
+ hovertemplate: str,
+ show_legend: bool = False,
+ legendrank: Optional[int] = None,
+) -> List[dict]:
+ """Renders line trace for each realization, includes history line if present
+
+ `Input:`
+ * vector_df: pd.DataFrame - Dataframe with vector data with following columns:\n
+ ["DATE", "REAL", vector]
+
+ * ensemble: str - Name of ensemble
+ * color: str - color for traces
+ * legend_group: str - legend group owner
+ * line_shape: str - specified line shape for trace
+ * show_legend: bool - show legend when true, otherwise do not show
+ * hovertemplate: str - template for hovering of data points in trace lines
+ * legendrank: int - rank value for legend in figure
+ """
+ vector_names = list(set(vector_df.columns) ^ set(["DATE", "REAL"]))
+ if len(vector_names) != 1:
+ raise ValueError(
+ f"Expected one vector column present in dataframe, got {len(vector_names)}!"
+ )
+
+ vector_name = vector_names[0]
+ return [
+ {
+ "line": {"shape": line_shape},
+ "x": list(real_df["DATE"]),
+ "y": list(real_df[vector_name]),
+ "hovertemplate": f"{hovertemplate}Realization: {real}, Ensemble: {ensemble}",
+ "name": legend_group,
+ "legendgroup": legend_group,
+ "marker": {"color": color},
+ "legendrank": legendrank,
+ "showlegend": real_no == 0 and show_legend,
+ }
+ for real_no, (real, real_df) in enumerate(vector_df.groupby("REAL"))
+ ]
+
+
+def validate_vector_statistics_df_columns(
+ vector_statistics_df: pd.DataFrame,
+) -> None:
+ """Validate columns of vector statistics DataFrame
+
+ Verify DataFrame columns: ["DATE", MEAN, MIN, MAX, P10, P90, P50]
+
+ Raise value error if columns are not matching
+
+ `Input:`
+ * vector_statistics_df: pd.Dataframe - Dataframe with dates and vector statistics columns.
+ """
+ expected_columns = [
+ "DATE",
+ StatisticsOptions.MEAN,
+ StatisticsOptions.MIN,
+ StatisticsOptions.MAX,
+ StatisticsOptions.P10,
+ StatisticsOptions.P90,
+ StatisticsOptions.P50,
+ ]
+ if list(vector_statistics_df.columns) != expected_columns:
+ raise ValueError(
+ f"Incorrect dataframe columns, expected {expected_columns}, got "
+ f"{vector_statistics_df.columns}"
+ )
+
+
+def create_vector_statistics_traces(
+ vector_statistics_df: pd.DataFrame,
+ statistics_options: List[StatisticsOptions],
+ color: str,
+ legend_group: str,
+ line_shape: str,
+    hovertemplate: str = "(%{x}, %{y})<br>",
+ show_legend: bool = False,
+ legendrank: Optional[int] = None,
+) -> List[Dict[str, Any]]:
+ """Get statistical lines for provided vector statistics DataFrame.
+
+ `Input:`
+ * vector_statistics_df: pd.Dataframe - Dataframe with dates and statistics columns
+ for specific vector:\n
+ DataFrame columns: ["DATE", MEAN, MIN, MAX, P10, P90, P50]
+
+ * statistics_options: List[StatisticsOptions] - List of statistic options to include
+ * color: str - color for traces
+ * legend_group: str - legend group owner
+ * line_shape: str - specified line shape for trace
+ * hovertemplate: str - template for hovering of data points in trace lines
+ * show_legend: bool - show legend when true, otherwise do not show
+ * legendrank: int - rank value for legend in figure
+ """
+ # Validate columns format
+ validate_vector_statistics_df_columns(vector_statistics_df)
+
+ low_data = (
+ LineData(
+ data=vector_statistics_df[StatisticsOptions.P90].values,
+ name=StatisticsOptions.P90.value,
+ )
+ if StatisticsOptions.P90 in statistics_options
+ else None
+ )
+ mid_data = (
+ LineData(
+ data=vector_statistics_df[StatisticsOptions.P50].values,
+ name=StatisticsOptions.P50.value,
+ )
+ if StatisticsOptions.P50 in statistics_options
+ else None
+ )
+ high_data = (
+ LineData(
+ data=vector_statistics_df[StatisticsOptions.P10].values,
+ name=StatisticsOptions.P10.value,
+ )
+ if StatisticsOptions.P10 in statistics_options
+ else None
+ )
+ mean_data = (
+ LineData(
+ data=vector_statistics_df[StatisticsOptions.MEAN].values,
+ name=StatisticsOptions.MEAN.value,
+ )
+ if StatisticsOptions.MEAN in statistics_options
+ else None
+ )
+ minimum = (
+ vector_statistics_df[StatisticsOptions.MIN].values
+ if StatisticsOptions.MIN in statistics_options
+ else None
+ )
+ maximum = (
+ vector_statistics_df[StatisticsOptions.MAX].values
+ if StatisticsOptions.MAX in statistics_options
+ else None
+ )
+
+ data = StatisticsData(
+ samples=vector_statistics_df["DATE"].values,
+ free_line=mean_data,
+ minimum=minimum,
+ maximum=maximum,
+ low=low_data,
+ mid=mid_data,
+ high=high_data,
+ )
+ return create_statistics_traces(
+ data=data,
+ color=color,
+ legend_group=legend_group,
+ line_shape=line_shape,
+ show_legend=show_legend,
+ hovertemplate=hovertemplate,
+ legendrank=legendrank,
+ )
+
+
+def create_vector_fanchart_traces(
+ vector_statistics_df: pd.DataFrame,
+ fanchart_options: List[FanchartOptions],
+ color: str,
+ legend_group: str,
+ line_shape: str,
+    hovertemplate: str = "(%{x}, %{y})<br>",
+ show_legend: bool = False,
+ legendrank: Optional[int] = None,
+) -> List[Dict[str, Any]]:
+ """Get statistical fanchart traces for provided vector statistics DataFrame.
+
+ `Input:`
+ * vector_statistics_df: pd.Dataframe - Dataframe with dates and statistics columns
+ for specific vector:\n
+ DataFrame columns: ["DATE", MEAN, MIN, MAX, P10, P90, P50]
+
+ * fanchart_options: List[FanchartOptions] - List of fanchart options to include
+ * color: str - color for traces and fill
+ * legend_group: str - legend group owner
+ * line_shape: str - specified line shape for trace
+ * hovertemplate: str - template for hovering of data points in trace lines
+ * show_legend: bool - show legend when true, otherwise do not show
+ * legendrank: int - rank value for legend in figure
+ """
+ # Validate columns format
+ validate_vector_statistics_df_columns(vector_statistics_df)
+
+ low_high_data = (
+ LowHighData(
+ low_data=vector_statistics_df[StatisticsOptions.P90].values,
+ low_name="P90",
+ high_data=vector_statistics_df[StatisticsOptions.P10].values,
+ high_name="P10",
+ )
+ if FanchartOptions.P10_P90 in fanchart_options
+ else None
+ )
+ minimum_maximum_data = (
+ MinMaxData(
+ minimum=vector_statistics_df[StatisticsOptions.MIN].values,
+ maximum=vector_statistics_df[StatisticsOptions.MAX].values,
+ )
+ if FanchartOptions.MIN_MAX in fanchart_options
+ else None
+ )
+ mean_data = (
+ FreeLineData(
+ "Mean",
+ vector_statistics_df[StatisticsOptions.MEAN].values,
+ )
+ if FanchartOptions.MEAN in fanchart_options
+ else None
+ )
+
+ data = FanchartData(
+ samples=vector_statistics_df["DATE"].tolist(),
+ low_high=low_high_data,
+ minimum_maximum=minimum_maximum_data,
+ free_line=mean_data,
+ )
+ return get_fanchart_traces(
+ data=data,
+ color=color,
+ legend_group=legend_group,
+ line_shape=line_shape,
+ show_legend=show_legend,
+ hovertemplate=hovertemplate,
+ legendrank=legendrank,
+ )
+
+
+def create_history_vector_trace(
+ samples: list,
+ history_data: np.ndarray,
+ line_shape: str,
+ color: str = "black",
+ vector_name: Optional[str] = None,
+ show_legend: bool = False,
+ legendrank: Optional[int] = None,
+) -> dict:
+ """Returns the history data as trace line
+
+ `Input:`
+ * samples: list - list of samples
+ * history_data: np.ndarray - 1D np.array of history data
+ * line_shape: str - specified line shape
+ * color: str - line color
+ * vector_name: Optional[str] - Name of vector, appended to hovertext if provided
+ * show_legend: bool - show legend when true, otherwise do not show
+
+ `Return:`
+    Trace line for provided history data. Raises ValueError if the number of samples
+    and the number of history data points do not match.
+ """
+ if len(samples) != len(history_data):
+ raise ValueError("Number of samples unequal number of data points!")
+
+ hovertext = "History" if vector_name is None else "History: " + vector_name
+
+ return {
+ "line": {"shape": line_shape},
+ "x": samples,
+ "y": history_data,
+ "hovertext": hovertext,
+ "hoverinfo": "y+x+text",
+ "name": "History",
+ "marker": {"color": color},
+ "showlegend": show_legend,
+ "legendgroup": "History",
+ "legendrank": legendrank,
+ }
+
+
+def render_hovertemplate(vector: str, sampling_frequency: Optional[Frequency]) -> str:
+ """Based on render_hovertemplate(vector: str, interval: Optional[str]) in
+ webviz_subsurface/_utils/simulation_timeseries.py
+
+ Adjusted to use Frequency enum and handle "Raw" and "weekly" frequency.
+
+ `Input:`
+ * vector: str - name of vector
+ * sampling_frequency: Optional[Frequency] - sampling frequency for hovering data info
+ """
+    if is_interval_or_average_vector(vector) and sampling_frequency:
+        if sampling_frequency in [Frequency.DAILY, Frequency.WEEKLY]:
+            return "(%{x|%b} %{x|%-d}, %{x|%Y}, %{y})<br>"
+        if sampling_frequency == Frequency.MONTHLY:
+            return "(%{x|%b} %{x|%Y}, %{y})<br>"
+        if sampling_frequency == Frequency.YEARLY:
+            return "(%{x|%Y}, %{y})<br>"
+        raise ValueError(f"Interval {sampling_frequency.value} is not supported.")
+    return "(%{x}, %{y})<br>"  # Plotly's default behavior
diff --git a/webviz_subsurface/plugins/_simulation_time_series/utils/delta_ensemble_utils.py b/webviz_subsurface/plugins/_simulation_time_series/utils/delta_ensemble_utils.py
new file mode 100644
index 000000000..1463cbf75
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/utils/delta_ensemble_utils.py
@@ -0,0 +1,69 @@
+from typing import Dict, List, Tuple
+
+from webviz_subsurface._providers import EnsembleSummaryProvider
+
+from ..types import DeltaEnsemble, ProviderSet
+
+
+def create_delta_ensemble_name(delta_ensemble: DeltaEnsemble) -> str:
+ """Create delta ensemble name from delta ensemble"""
+ name_a = delta_ensemble["ensemble_a"]
+ name_b = delta_ensemble["ensemble_b"]
+ return f"({name_a})-({name_b})"
+
+
+def create_delta_ensemble_names(delta_ensembles: List[DeltaEnsemble]) -> List[str]:
+ """Create list of delta ensemble names form list of delta ensembles"""
+ return [
+ create_delta_ensemble_name(delta_ensemble) for delta_ensemble in delta_ensembles
+ ]
+
+
+def create_delta_ensemble_name_dict(
+ delta_ensembles: List[DeltaEnsemble],
+) -> Dict[str, DeltaEnsemble]:
+ """Create dictionary with delta ensemble name as key and and corresponding delta ensemble
+ as value, from list if delta ensembles"""
+ return {
+ create_delta_ensemble_name(delta_ensemble): delta_ensemble
+ for delta_ensemble in delta_ensembles
+ }
+
+
+def is_delta_ensemble_providers_in_provider_set(
+ delta_ensemble: DeltaEnsemble, provider_set: ProviderSet
+) -> bool:
+ """Check if the delta ensemble providers exist in provider set
+
+ `Returns:`
+ * True if name of ensemble A and ensemble B exist among provider set names,
+ false otherwise
+ """
+ return (
+ delta_ensemble["ensemble_a"] in provider_set.names()
+ and delta_ensemble["ensemble_b"] in provider_set.names()
+ )
+
+
+def create_delta_ensemble_provider_pair(
+ delta_ensemble: DeltaEnsemble, provider_set: ProviderSet
+) -> Tuple[EnsembleSummaryProvider, EnsembleSummaryProvider]:
+ """Create pair of providers representing a delta ensemble
+
+    `Return:`
+    * Tuple with providers for ensemble A and ensemble B in a delta ensemble,
+    retrieved from the provider set. If one or more providers do not exist, an
+    exception is raised!
+ """
+
+ ensemble_a = delta_ensemble["ensemble_a"]
+ ensemble_b = delta_ensemble["ensemble_b"]
+ if not is_delta_ensemble_providers_in_provider_set(delta_ensemble, provider_set):
+ provider_names = provider_set.names()
+ raise ValueError(
+ f"Request delta ensemble with ensemble {ensemble_a}"
+ f" and ensemble {ensemble_b}. Ensemble {ensemble_a} exists: "
+ f"{ensemble_a in provider_names}, ensemble {ensemble_b} exists: "
+ f"{ensemble_b in provider_names}."
+ )
+ return (provider_set.provider(ensemble_a), provider_set.provider(ensemble_b))
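The naming helpers above compose a delta ensemble name as `(A)-(B)`; a short usage sketch with hypothetical ensemble names:

```python
# Hypothetical delta ensembles (plain dicts matching the DeltaEnsemble shape)
delta_ensembles = [
    {"ensemble_a": "iter-0", "ensemble_b": "iter-3"},
    {"ensemble_a": "iter-1", "ensemble_b": "iter-3"},
]

def create_delta_ensemble_name(delta_ensemble: dict) -> str:
    # Same "(A)-(B)" convention as in delta_ensemble_utils above
    return f"({delta_ensemble['ensemble_a']})-({delta_ensemble['ensemble_b']})"

# Name -> delta ensemble lookup, as built by create_delta_ensemble_name_dict
name_dict = {create_delta_ensemble_name(e): e for e in delta_ensembles}
```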
diff --git a/webviz_subsurface/plugins/_simulation_time_series/utils/derived_ensemble_vectors_accessor_utils.py b/webviz_subsurface/plugins/_simulation_time_series/utils/derived_ensemble_vectors_accessor_utils.py
new file mode 100644
index 000000000..c9b529e33
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/utils/derived_ensemble_vectors_accessor_utils.py
@@ -0,0 +1,86 @@
+from typing import Dict, List, Optional
+
+from webviz_subsurface_components import ExpressionInfo
+
+from webviz_subsurface._providers import Frequency
+
+from ..types import (
+ DeltaEnsemble,
+ DerivedDeltaEnsembleVectorsAccessorImpl,
+ DerivedEnsembleVectorsAccessorImpl,
+ DerivedVectorsAccessor,
+ ProviderSet,
+)
+from .delta_ensemble_utils import (
+ create_delta_ensemble_name_dict,
+ create_delta_ensemble_provider_pair,
+ is_delta_ensemble_providers_in_provider_set,
+)
+
+
+def create_derived_vectors_accessor_dict(
+ ensembles: List[str],
+ vectors: List[str],
+ provider_set: ProviderSet,
+ expressions: List[ExpressionInfo],
+ delta_ensembles: List[DeltaEnsemble],
+ resampling_frequency: Optional[Frequency],
+) -> Dict[str, DerivedVectorsAccessor]:
+ """Create dictionary with ensemble name as key and derived vectors accessor
+ as key.
+
+ Obtain iterable object with ensemble name and corresponding vector data accessor.
+
+ Creates derived vectors accessor based on ensemble type: Single ensemble or
+ Delta ensemble.
+
+ The derived vectors are based on listed vectors and created expressions.
+
+ `Input:`
+ * ensembles: List[str] - list of ensemble names
+    * vectors: List[str] - list of vectors to create accessors for
+ * provider_set: ProviderSet - set of EnsembleSummaryProviders to obtain vector data
+ * expressions: List[ExpressionInfo] - list of expressions for calculating vectors
+ * delta_ensembles: List[DeltaEnsemble] - list of created delta ensembles
+ * resampling_frequency: Optional[Frequency] - Resampling frequency setting for
+ EnsembleSummaryProviders
+
+ `Return:`
+ * Dict[str, DerivedVectorsAccessor] - dictionary with ensemble name as key and
+ DerivedVectorsAccessor implementations based on ensemble type - single ensemble
+ or delta ensemble.
+
+ TODO: Consider as a factory?
+ """
+ ensemble_data_accessor_dict: Dict[str, DerivedVectorsAccessor] = {}
+ delta_ensemble_name_dict = create_delta_ensemble_name_dict(delta_ensembles)
+ provider_names = provider_set.names()
+ for ensemble in ensembles:
+ if ensemble in provider_names:
+ ensemble_data_accessor_dict[ensemble] = DerivedEnsembleVectorsAccessorImpl(
+ ensemble,
+ provider_set.provider(ensemble),
+ vectors,
+ expressions,
+ resampling_frequency,
+ )
+ elif (
+ ensemble in delta_ensemble_name_dict.keys()
+ and is_delta_ensemble_providers_in_provider_set(
+ delta_ensemble_name_dict[ensemble], provider_set
+ )
+ ):
+ provider_pair = create_delta_ensemble_provider_pair(
+ delta_ensemble_name_dict[ensemble], provider_set
+ )
+ ensemble_data_accessor_dict[
+ ensemble
+ ] = DerivedDeltaEnsembleVectorsAccessorImpl(
+ name=ensemble,
+ provider_pair=provider_pair,
+ vectors=vectors,
+ expressions=expressions,
+ resampling_frequency=resampling_frequency,
+ )
+
+ return ensemble_data_accessor_dict
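The dispatch in `create_derived_vectors_accessor_dict` keys off whether a selected name is a plain provider or a registered delta ensemble. A toy sketch with strings standing in for the accessor implementations (all names hypothetical):

```python
# Names known as single-ensemble providers vs. registered delta ensembles
provider_names = ["iter-0", "iter-3"]
delta_ensemble_names = {"(iter-0)-(iter-3)"}

def accessor_for(ensemble: str):
    # Single ensembles and delta ensembles get different accessor types;
    # unknown names are silently skipped, as in the function above
    if ensemble in provider_names:
        return f"single:{ensemble}"
    if ensemble in delta_ensemble_names:
        return f"delta:{ensemble}"
    return None

selected = ["iter-0", "(iter-0)-(iter-3)"]
accessor_dict = {name: accessor_for(name) for name in selected}
```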
diff --git a/webviz_subsurface/plugins/_simulation_time_series/utils/from_timeseries_cumulatives.py b/webviz_subsurface/plugins/_simulation_time_series/utils/from_timeseries_cumulatives.py
new file mode 100644
index 000000000..e5511fde4
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/utils/from_timeseries_cumulatives.py
@@ -0,0 +1,143 @@
+import numpy as np
+import pandas as pd
+
+###################################################################################
+# NOTE: This code is a copy and modification of
+# webviz_subsurface/_datainput/from_timeseries_cumulatives.py for usage with
+# EnsembleSummaryProvider functionality
+#
+# Renaming is performed for clarity
+#
+# Some additional functions are added.
+###################################################################################
+
+
+def is_interval_or_average_vector(vector: str) -> bool:
+ return vector.startswith("AVG_") or vector.startswith("INTVL_")
+
+
+def get_cumulative_vector_name(vector: str) -> str:
+    if not is_interval_or_average_vector(vector):
+        raise ValueError(
+            f'Expected "{vector}" to be a vector calculated from a cumulative vector!'
+        )
+
+    if vector.startswith("AVG_"):
+        return vector[4:7] + vector[7:].replace("R", "T", 1)
+    if vector.startswith("INTVL_"):
+        # str.lstrip() strips a character set, not a prefix - slice off the prefix instead
+        return vector[len("INTVL_") :]
+    raise ValueError(f"Expected {vector} to be a cumulative vector!")
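The AVG_/INTVL_ naming convention can be exercised standalone. A minimal sketch (the helper name `cumulative_name` and the example vectors are illustrative, not part of the plugin):

```python
def cumulative_name(vector: str) -> str:
    """Map an AVG_/INTVL_ vector name back to its cumulative source vector."""
    if vector.startswith("AVG_"):
        # Keep the first three letters of the keyword, swap the first "R"
        # thereafter for "T": AVG_WOPR:OP_1 -> WOPT:OP_1
        return vector[4:7] + vector[7:].replace("R", "T", 1)
    if vector.startswith("INTVL_"):
        # Slice off the prefix - str.lstrip("INTVL_") would strip a character set
        return vector[len("INTVL_") :]
    raise ValueError(f"{vector} is not an AVG_/INTVL_ vector")


print(cumulative_name("AVG_WOPR:OP_1"))    # WOPT:OP_1
print(cumulative_name("INTVL_WOPT:OP_1"))  # WOPT:OP_1
```

The slicing rather than `lstrip` matters for vectors whose base keyword starts with one of the prefix characters, e.g. `INTVL_TCPU`.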
+
+
+def calculate_from_resampled_cumulative_vectors_df(
+ vectors_df: pd.DataFrame,
+ as_rate_per_day: bool,
+) -> pd.DataFrame:
+ """
+ Calculates interval delta or average rate data for vector columns in provided dataframe.
+ This function assumes data is already resampled when retrieved with ensemble summary
+ provider.
+
+    Date sampling should be according to the Frequency enum in
+    webviz_subsurface/_providers/ensemble_summary_provider.py.
+    Does not handle raw data or time deltas finer than daily!
+
+ `INPUT:`
+ * vectors_df: pd.Dataframe - Dataframe with columns:
+ ["DATE", "REAL", vector1, ..., vectorN]
+
+ `NOTE:`
+ - Does not handle raw data format with varying sampling or sampling frequency higher than
+ daily!
+ - Dataframe has columns:\n
+ "DATE": Series with dates on datetime.datetime format
+ "REAL": Series of realization number identifier
+ vector1, ..., vectorN: Series of vector data for vector of given column name
+
+    `TODO:`
+    * IMPROVE FUNCTION NAME?
+    * Handle raw data format?
+    * Provide e.g. a dict with info on "avg and intvl" calculation for each vector column?
+    Everything could then be calculated for the provided vector columns without iterating
+    column by column.
+ """
+ vectors_df = vectors_df.copy()
+
+    # Vector columns = all columns except "DATE" and "REAL" (plain set difference)
+    column_keys = list(set(vectors_df.columns) - set(["DATE", "REAL"]))
+
+ # Sort by realizations, thereafter dates
+ vectors_df.sort_values(by=["REAL", "DATE"], inplace=True)
+
+    # Copy the realization number into a helper column. After .diff(), a non-zero
+    # value in "realuid" marks a row where the diff spans two different realizations,
+    # and the diffed vector values must be zeroed out (done further below).
+    # Could alternatively loop over ensembles and realizations, but this is quicker for
+    # larger datasets.
+    vectors_df["realuid"] = vectors_df["REAL"]
+
+ vectors_df.set_index(["REAL", "DATE"], inplace=True)
+
+    # Move REAL back to a regular column, keeping DATE as index
+    vectors_df.reset_index(level=["REAL"], inplace=True)
+
+ cumulative_name_map = {
+ vector: rename_vector_from_cumulative(vector, as_rate_per_day)
+ for vector in column_keys
+ }
+ cumulative_vectors = list(cumulative_name_map.values())
+
+ # Take diff of given column_keys indexes - preserve REAL
+ cumulative_vectors_df = pd.concat(
+ [
+ vectors_df[["REAL"]],
+ vectors_df[["realuid"] + column_keys]
+ .diff()
+ .shift(-1)
+ .rename(
+ mapper=cumulative_name_map,
+ axis=1,
+ ),
+ ],
+ axis=1,
+ )
+ cumulative_vectors_df[cumulative_vectors] = cumulative_vectors_df[
+ cumulative_vectors
+ ].fillna(value=0)
+
+ # Reset index (DATE becomes regular column)
+ cumulative_vectors_df.reset_index(inplace=True)
+
+ # Convert interval cumulative to daily average rate if requested
+ if as_rate_per_day:
+ days = cumulative_vectors_df["DATE"].diff().shift(-1).dt.days.fillna(value=0)
+ for vector in column_keys:
+ with np.errstate(invalid="ignore"):
+ cumulative_vector_name = cumulative_name_map[vector]
+ cumulative_vectors_df.loc[:, cumulative_vector_name] = (
+ cumulative_vectors_df[cumulative_vector_name].values / days.values
+ )
+
+ # Find .diff() between two realizations and set value = 0
+ cumulative_vectors_df.loc[
+ cumulative_vectors_df["realuid"] != 0, cumulative_vectors
+ ] = 0
+ cumulative_vectors_df.drop("realuid", axis=1, inplace=True)
+
+ return cumulative_vectors_df
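The `.diff().shift(-1)` pattern above can be illustrated on a toy dataframe. A minimal sketch with two realizations and one cumulative vector (all names and values below are illustrative, not from the plugin):

```python
import pandas as pd

df = pd.DataFrame(
    {
        "REAL": [0, 0, 0, 1, 1, 1],
        "DATE": pd.to_datetime(["2020-01-01", "2020-02-01", "2020-03-01"] * 2),
        "FOPT": [0.0, 10.0, 25.0, 0.0, 8.0, 20.0],
    }
)

# Forward interval delta: row i becomes value[i+1] - value[i]
df["INTVL_FOPT"] = df["FOPT"].diff().shift(-1)

# Rows where the diff spans two realizations (plus the trailing row) are invalid
crosses_real = df["REAL"].diff().shift(-1).ne(0)
df.loc[crosses_real, "INTVL_FOPT"] = 0.0
df["INTVL_FOPT"] = df["INTVL_FOPT"].fillna(0.0)

print(df["INTVL_FOPT"].tolist())  # [10.0, 15.0, 0.0, 8.0, 12.0, 0.0]
```

This is the same trick the production code uses with the `"realuid"` helper column: the realization boundary is detected from the diffed realization number rather than by looping per realization.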
+
+
+def rename_vector_from_cumulative(vector: str, as_rate: bool) -> str:
+    """This function assumes the vector is a cumulative/total vector named according to the
+    Eclipse standard, and is fairly naive when converting to rate. Based on the list in libecl
+    https://github.com/equinor/libecl/blob/69f1ee0ddf696c87b6d85eca37eed7e8b66ac2db/\
+    lib/ecl/smspec_node.cpp#L531-L586
+    the "T" identifying total/cumulative should not occur before the 4th letter, as all the
+    listed strings are prefixed with one or two letters in the vectors. The replace therefore
+    starts at position 3 (the 4th letter) to reduce the risk of errors in the conversion to
+    rate naming, but it is hard to be completely safe.
+    """
+ return (
+ f"AVG_{vector[0:3] + vector[3:].replace('T', 'R', 1)}"
+ if as_rate
+ else f"INTVL_{vector}"
+ )
diff --git a/webviz_subsurface/plugins/_simulation_time_series/utils/history_vectors.py b/webviz_subsurface/plugins/_simulation_time_series/utils/history_vectors.py
new file mode 100644
index 000000000..5501a6e9d
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/utils/history_vectors.py
@@ -0,0 +1,66 @@
+from typing import Dict, List, Optional
+
+import pandas as pd
+
+from webviz_subsurface._abbreviations.reservoir_simulation import historical_vector
+from webviz_subsurface._providers import EnsembleSummaryProvider, Frequency
+
+
+def create_history_vectors_df(
+ provider: EnsembleSummaryProvider,
+ vector_names: List[str],
+ resampling_frequency: Optional[Frequency],
+) -> pd.DataFrame:
+ """Get dataframe with existing historical vector data for provided vectors.
+
+    The returned dataframe contains one column per vector, holding the corresponding
+    historical data.
+
+    `Input:`
+    * provider: EnsembleSummaryProvider - provider to retrieve historical vector data from
+    * vector_names: List[str] - list of vectors to get historical data for
+    [vector1, ... , vectorN]
+    * resampling_frequency: Optional[Frequency] - resampling frequency, ignored if the
+    provider does not support resampling
+
+    `Output:`
+    * Dataframe with non-historical vector names in columns and their historical data in rows.
+    `Columns` in dataframe: ["DATE", "REAL", vector1, ..., vectorN]
+
+ ---------------------
+ `NOTE:`
+ * Raise ValueError if vector does not exist for ensemble
+ * If historical data does not exist for provided vector, vector is excluded from
+ the returned dataframe.
+    * Column names are not the historical vector name, but the original vector name,
+    i.e. `WOPTH:OP_1` data is placed in the column named `WOPT:OP_1`
+ """
+ if len(vector_names) < 1:
+ raise ValueError("Empty list of vector names!")
+
+ provider_vectors = provider.vector_names()
+ resampling_frequency = (
+ resampling_frequency if provider.supports_resampling() else None
+ )
+
+    # Verify that all requested vectors exist for the provider
+ for elm in vector_names:
+ if elm not in provider_vectors:
+ raise ValueError(f'Vector "{elm}" not present among vectors for provider')
+
+ # Dict with historical vector name as key, and non-historical vector name as value
+ historical_vector_and_vector_name_dict: Dict[str, str] = {}
+ for vector in vector_names:
+ # TODO: Create new historical_vector according to new provider metadata?
+ historical_vector_name = historical_vector(vector=vector, smry_meta=None)
+ if historical_vector_name and historical_vector_name in provider.vector_names():
+ historical_vector_and_vector_name_dict[historical_vector_name] = vector
+
+ if not historical_vector_and_vector_name_dict:
+ return pd.DataFrame()
+
+ historical_vector_names = list(historical_vector_and_vector_name_dict.keys())
+
+ # TODO: Ensure realization no 0 is good enough
+ historical_vectors_df = provider.get_vectors_df(
+ historical_vector_names, resampling_frequency, realizations=[0]
+ )
+ return historical_vectors_df.rename(columns=historical_vector_and_vector_name_dict)
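The mapping from vector name to historical vector name follows the Eclipse convention of appending `H` to the base keyword, before any well/group qualifier. A simplified sketch (`history_name` is a hypothetical stand-in; the real `historical_vector()` helper handles more cases):

```python
from typing import Dict


def history_name(vector: str) -> str:
    """Append the Eclipse history suffix 'H' to the base keyword, before any ':' part."""
    parts = vector.split(":", 1)
    return ":".join([parts[0] + "H"] + parts[1:])


# Historical name as key, original vector name as value - used to rename columns back
mapping: Dict[str, str] = {history_name(v): v for v in ["WOPT:OP_1", "FOPR"]}
print(mapping)  # {'WOPTH:OP_1': 'WOPT:OP_1', 'FOPRH': 'FOPR'}
```

The dataframe returned by the provider is then renamed with `df.rename(columns=mapping)`, so downstream code can address historical data by the original vector name, as noted in the docstring above.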
diff --git a/webviz_subsurface/plugins/_simulation_time_series/utils/provider_set_utils.py b/webviz_subsurface/plugins/_simulation_time_series/utils/provider_set_utils.py
new file mode 100644
index 000000000..279299ddc
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/utils/provider_set_utils.py
@@ -0,0 +1,79 @@
+from typing import Dict, List, Optional
+
+from webviz_subsurface._abbreviations.reservoir_simulation import (
+ simulation_unit_reformat,
+ simulation_vector_description,
+)
+from webviz_subsurface._utils.vector_calculator import (
+ ExpressionInfo,
+ VectorCalculator,
+ get_expression_from_name,
+)
+
+from ..types import ProviderSet
+
+
+def create_vector_plot_titles_from_provider_set(
+ vector_names: List[str],
+ expressions: List[ExpressionInfo],
+ provider_set: ProviderSet,
+) -> Dict[str, str]:
+ """Create plot titles for vectors
+
+ Create plot titles for vectors by use of provider set metadata and list of
+ calculation expressions
+
+ `Return:`
+ * Dictionary with vector names as keys and the corresponding title as value
+ """
+ vector_title_dict: Dict[str, str] = {}
+
+ all_vector_names = provider_set.all_vector_names()
+ for vector_name in vector_names:
+ vector = vector_name
+
+        if vector.startswith("AVG_"):
+            # str.lstrip() strips a character set, not a prefix - slice off the prefix
+            vector = vector[len("AVG_") :]
+        if vector.startswith("INTVL_"):
+            vector = vector[len("INTVL_") :]
+
+ if vector in all_vector_names:
+ metadata = provider_set.vector_metadata(vector)
+ title = simulation_vector_description(vector_name)
+ if metadata and metadata.unit:
+ title = (
+ f"{simulation_vector_description(vector_name)}"
+ f" [{simulation_unit_reformat(metadata.unit)}]"
+ )
+ vector_title_dict[vector_name] = title
+ else:
+ expression = get_expression_from_name(vector_name, expressions)
+ if expression:
+ unit = create_calculated_unit_from_provider_set(
+ expression, provider_set
+ )
+ if unit:
+ # TODO: Expression description instead of vector name in title?
+ vector_title_dict[vector_name] = f"{vector_name} [{unit}]"
+ else:
+ vector_title_dict[vector_name] = vector_name
+ else:
+ vector_title_dict[vector_name] = vector_name
+ return vector_title_dict
+
+
+def create_calculated_unit_from_provider_set(
+ expression: ExpressionInfo, provider_set: ProviderSet
+) -> Optional[str]:
+ try:
+ # Parse only for validation
+ VectorCalculator.parser.parse(expression["expression"])
+ unit_expr: str = expression["expression"]
+ for elm in expression["variableVectorMap"]:
+ metadata = provider_set.vector_metadata(elm["vectorName"][0])
+ if metadata and metadata.unit:
+ unit_expr = unit_expr.replace(elm["variableName"], metadata.unit)
+
+ return unit_expr
+ except ValueError:
+ return None
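The unit-expression substitution in `create_calculated_unit_from_provider_set` can be sketched without the provider machinery (the expression, variable map, and unit lookup below are illustrative assumptions):

```python
expression = "x + y"
variable_vector_map = [
    {"variableName": "x", "vectorName": ["WOPT:OP_1"]},
    {"variableName": "y", "vectorName": ["WGPT:OP_1"]},
]
units = {"WOPT:OP_1": "SM3", "WGPT:OP_1": "SM3"}  # stand-in for provider metadata

# Replace each variable name in the expression with the unit of its mapped vector
unit_expr = expression
for elm in variable_vector_map:
    unit = units.get(elm["vectorName"][0])
    if unit:
        unit_expr = unit_expr.replace(elm["variableName"], unit)

print(unit_expr)  # SM3 + SM3
```

Note that plain `str.replace` can misfire if a variable name happens to occur inside a previously substituted unit string; this simple approach trades that edge case for brevity.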
diff --git a/webviz_subsurface/plugins/_simulation_time_series/utils/trace_line_shape.py b/webviz_subsurface/plugins/_simulation_time_series/utils/trace_line_shape.py
new file mode 100644
index 000000000..d1901daf0
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/utils/trace_line_shape.py
@@ -0,0 +1,25 @@
+from typing import Optional
+
+from webviz_subsurface._providers import VectorMetadata
+
+from .from_timeseries_cumulatives import is_interval_or_average_vector
+
+
+def get_simulation_line_shape(
+ line_shape_fallback: str,
+ vector: str,
+ vector_metadata: Optional[VectorMetadata] = None,
+) -> str:
+ """Get simulation time series line shape based on vector metadata"""
+ if is_interval_or_average_vector(vector):
+ # These custom calculated vectors are valid forwards in time.
+ return "hv"
+
+ if vector_metadata is None:
+ return line_shape_fallback
+ if vector_metadata.is_rate:
+ # Eclipse rate vectors are valid backwards in time.
+ return "vh"
+ if vector_metadata.is_total:
+ return "linear"
+ return line_shape_fallback
diff --git a/webviz_subsurface/plugins/_simulation_time_series/utils/vector_statistics.py b/webviz_subsurface/plugins/_simulation_time_series/utils/vector_statistics.py
new file mode 100644
index 000000000..dcd9e0fac
--- /dev/null
+++ b/webviz_subsurface/plugins/_simulation_time_series/utils/vector_statistics.py
@@ -0,0 +1,58 @@
+from typing import List
+
+import numpy as np
+import pandas as pd
+
+from ..types import StatisticsOptions
+
+
+def create_vectors_statistics_df(vectors_df: pd.DataFrame) -> pd.DataFrame:
+ """
+    Create a statistics dataframe for the vectors in the columns of the provided dataframe
+
+    Calculates min, max, mean, p10, p90 and p50 for each vector column
+
+    `Input:`
+    * vectors_df: pd.DataFrame - Dataframe with vector data and columns:
+    ["DATE", "REAL", vector1, ... , vectorN]
+
+ `Returns:`
+ * Dataframe with double column level:\n
+ [ "DATE", vector1, ... vectorN
+ MEAN, MIN, MAX, P10, P90, P50 ... MEAN, MIN, MAX, P10, P90, P50]
+ """
+    # Get vector names, keep column order (plain set difference)
+    columns_list = list(vectors_df.columns)
+    vector_names = sorted(
+        set(columns_list) - set(["DATE", "REAL"]), key=columns_list.index
+    )
+
+    # Invert p10 and p90 due to oil industry convention.
+    def p10(x: List[float]) -> float:
+        return np.nanpercentile(x, q=90)
+
+    def p90(x: List[float]) -> float:
+        return np.nanpercentile(x, q=10)
+
+    def p50(x: List[float]) -> float:
+        return np.nanpercentile(x, q=50)
+
+ statistics_df: pd.DataFrame = (
+ vectors_df[["DATE"] + vector_names]
+ .groupby(["DATE"])
+ .agg([np.nanmean, np.nanmin, np.nanmax, p10, p90, p50])
+ .reset_index(level=["DATE"], col_level=0)
+ )
+
+ # Rename nanmin, nanmax and nanmean to min, max and mean.
+ col_stat_label_map = {
+ "nanmin": StatisticsOptions.MIN,
+ "nanmax": StatisticsOptions.MAX,
+ "nanmean": StatisticsOptions.MEAN,
+ "p10": StatisticsOptions.P10,
+ "p90": StatisticsOptions.P90,
+ "p50": StatisticsOptions.P50,
+ }
+ statistics_df.rename(columns=col_stat_label_map, level=1, inplace=True)
+
+ return statistics_df
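The inverted-percentile convention in `create_vectors_statistics_df` can be demonstrated on a toy dataframe (three realizations at one date; names and values are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "DATE": ["2020-01-01"] * 3,
        "REAL": [0, 1, 2],
        "FOPR": [10.0, 20.0, 30.0],
    }
)


# Oil-industry convention: P10 is the high (optimistic) value and P90 the low one,
# hence p10 uses the 90th percentile and vice versa.
def p10(x):
    return np.nanpercentile(x, q=90)


def p90(x):
    return np.nanpercentile(x, q=10)


stats = df.groupby("DATE")["FOPR"].agg(["mean", "min", "max", p10, p90])
print(stats.loc["2020-01-01", ["p10", "p90"]].tolist())  # [28.0, 12.0]
```

`DataFrame.agg` names the result columns after the function objects (`p10`, `p90`), which is why the production code can later rename them to the `StatisticsOptions` labels.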