minor fixes in learning documentation and inclusion of more detailed example description table + deleted unnecessary learning example
kim-mskw committed Nov 10, 2023
1 parent 53c7563 commit 1dc3622
Showing 13 changed files with 28 additions and 35,165 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -17,7 +17,7 @@ SPDX-License-Identifier: AGPL-3.0-or-later
[![](https://img.shields.io/pypi/status/assume-framework.svg)](https://pypi.org/pypi/assume-framework/)
[![](https://img.shields.io/readthedocs/assume)](https://assume.readthedocs.io/)

[![Open Tutorials In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LISiM1QvDIMXU68pJH-NqrMw5w7Awb24?usp=sharing)
[![Open Learning Tutorial](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LISiM1QvDIMXU68pJH-NqrMw5w7Awb24?usp=sharing)

**ASSUME** is an open-source toolbox for agent-based simulations of European electricity markets, with a primary focus on the German market setup. Developed as an open-source model, it aims to ensure usability and customizability for a wide range of users and use cases in the energy system modeling community.

@@ -89,7 +89,7 @@ How to configure a new unit in ASSUME?

How to introduce reinforcement learning to ASSUME?

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LISiM1QvDIMXU68pJH-NqrMw5w7Awb24?usp=sharing)
[![Open Learning Tutorial](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LISiM1QvDIMXU68pJH-NqrMw5w7Awb24?usp=sharing)



22 changes: 18 additions & 4 deletions docs/source/example_simulations.rst
@@ -6,7 +6,7 @@ Example Simulations
=====================

While the modeller can define her own input data for the simulation, we provide some example simulations to get started.
Here you can find an overview of the different examples provided.
Here you can find an overview of the different examples provided. Below you can find a detailed table explaining the different examples.


============================= ============================= =====================================================
@@ -17,7 +17,21 @@
small_with_opt_clearing example_01a Small simulation with optimization clearing instead of pay_as_clear.
small_with_BB example_01e Small simulation with block bids and complex clearing.
small_with_vre example_01b Small simulation with variable renewable energy.
example_01c A small study with CRM markets.
learning_small example_02a A small study with roughly 10 powerplants, where one powerplant is equipped with a learning bidding strategy and can learn to exert market power.
learning_medium example_02b A small study with roughly 10 powerplants, where multiple powerplants are equipped with a learning bidding strategy and learn that they do not have market power anymore.
small_learning_1 example_02a A small study with roughly 10 powerplants, where one powerplant is equipped with a learning bidding strategy and can learn to exert market power.
small_learning_2 example_02b A small study with roughly 10 powerplants, where multiple powerplants are equipped with a learning bidding strategy and learn that they do not have market power anymore.
============================= ============================= =====================================================

The following table categorizes the provided examples in more detail and lists the main ASSUME features used by each.


============================== =============== =============== =================== ====================== ============= ============= ================= ============== =============
Example Name Country Generation Tech Generation Volume Demand Tech Demand Volume Markets Bidding Strategy Grid Further Info
============================== =============== =============== =================== ====================== ============= ============= ================= ============== =============
small_learning_1 Germany conventional 12,500 MW fixed inflexible 1,000,000 MW EoM Learning, Naive No Resembles Case 1 from Harder et al. (2023)
small_learning_2 Germany conventional 12,500 MW fixed inflexible 1,000,000 MW EoM Learning, Naive No Resembles Case 2 from Harder et al. (2023)
============================== =============== =============== =================== ====================== ============= ============= ================= ============== =============


References
-----------
Harder, N., Qussous, R., & Weidlich, A. (2023). Fit for purpose: Modeling wholesale electricity markets realistically with multi-agent deep reinforcement learning. Energy and AI, 14, 100295. https://doi.org/10.1016/j.egyai.2023.100295
9 changes: 6 additions & 3 deletions docs/source/learning.rst
@@ -9,7 +9,10 @@ Reinforcement Learning
One unique characteristic of ASSUME is the use of Reinforcement Learning (RL) for the bidding of the agents.
To enable this, the architecture of the simulation is designed to accommodate the learning process. In this part of
the documentation, we give a short introduction to reinforcement learning in general and then pinpoint you to the
relevant parts of the code. If you want a hands-on introduction check out the prepared tutorial in Colab: https://colab.research.google.com/drive/1LISiM1QvDIMXU68pJH-NqrMw5w7Awb24?usp=sharing
relevant parts of the code. The descriptions are mostly based on the following paper:
Harder, N., Qussous, R., & Weidlich, A. (2023). Fit for purpose: Modeling wholesale electricity markets realistically with multi-agent deep reinforcement learning. Energy and AI, 14, 100295. https://doi.org/10.1016/j.egyai.2023.100295

If you want a hands-on introduction, check out the prepared tutorial in Colab: https://colab.research.google.com/drive/1LISiM1QvDIMXU68pJH-NqrMw5w7Awb24?usp=sharing


The Basics of Reinforcement Learning
@@ -81,10 +84,10 @@ where L is the loss function.

The actor and critic networks are trained simultaneously using the actor-critic algorithm, which updates the weights of
both networks at each time step. The actor-critic algorithm is a form of policy iteration, where the policy is updated based on the
estimated value function, and the value function is updated based on the.
estimated value function, and the value function is updated by the critic based on the observed rewards.
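
For intuition, one such simultaneous update can be sketched in a few lines of PyTorch. This is a minimal illustration only, not the ASSUME implementation; the network shapes, variable names, and the one-step reward target are assumptions made for this sketch::

    # Minimal actor-critic update sketch (illustrative; not the ASSUME code).
    import torch
    import torch.nn as nn

    obs_dim, act_dim = 4, 2
    actor = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, act_dim))
    critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 32), nn.Tanh(), nn.Linear(32, 1))
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    obs = torch.randn(8, obs_dim)     # batch of observed market states
    actions = actor(obs)              # deterministic policy output (e.g. bids)
    rewards = torch.randn(8, 1)       # placeholder for observed rewards

    # Critic update: regress Q(s, a) toward the observed one-step reward.
    q = critic(torch.cat([obs, actions.detach()], dim=-1))
    critic_loss = nn.functional.mse_loss(q, rewards)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor update: ascend the critic's value estimate of the actor's actions.
    actor_loss = -critic(torch.cat([obs, actor(obs)], dim=-1)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()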


1.2 Multi-Agent Learning
Multi-Agent Learning
------------------------

In a single-agent setup, the state transition and respective reward depend only on the actions of a single agent. However, in a
4 changes: 2 additions & 2 deletions examples/examples.py
@@ -47,8 +47,8 @@
"scenario": "example_02",
"study_case": "dam_case_2019",
},
"learning_small": {"scenario": "example_02a", "study_case": "base"},
"learning_medium": {"scenario": "example_02b", "study_case": "base"},
"small_learning_1": {"scenario": "example_02a", "study_case": "base"},
"small_learning_2": {"scenario": "example_02b", "study_case": "base"},
}
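
The renamed entries are looked up like any other key in the dictionary above. A hypothetical usage sketch (the actual run call in examples.py lies outside this hunk and is not shown):

# Hypothetical lookup of one of the renamed examples; only the keys
# changed in this commit, not the structure of the entries.
example = available_examples["small_learning_1"]
print(example["scenario"], example["study_case"])  # -> example_02a base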

# %%
92 changes: 0 additions & 92 deletions examples/inputs/example_01_rl/config.yaml

This file was deleted.

