Releases: aimat-lab/gcnn_keras
kgcnn v4.0.1
- Removed unused layers and added manual model building in scripts and training functions, since with `keras==3.0.5` the PyTorch trainer tries to rebuild the model even if it is already built, and does so eagerly without proper tensor input, which causes crashes for almost every model in kgcnn.
- Fixed error in `ExtensiveMolecularLabelScaler.transform`: missing default value.
- Added further benchmark results for kgcnn version 4.
- Fixed error in `kgcnn.layers.geom.PositionEncodingBasisLayer`.
- Fixed error in `kgcnn.literature.GCN.make_model_weighted`.
- Fixed error in `kgcnn.literature.AttentiveFP.make_model`.
- Had to change serialization for activation functions, since with `keras>=3.0.2` custom strings are not allowed and also cause clashes with built-in functions. We catch defaults to be as backward compatible as possible and changed to a serialization dictionary. Adapted all hyperparameters.
- Renamed `leaky_relu` and `swish` in `kgcnn.ops.activ` to `leaky_relu2` and `swish2`.
- Fixed error in jax scatter min/max functions.
- Added `kgcnn.__safe_scatter_max_min_to_zero__` for tensorflow and jax backend scattering, with default `True`.
- Added simple ragged support for loss and metrics.
- Added simple ragged support for `train_force.py`.
- Implemented random equivariant initialization for PAiNN.
- Implemented charge and dipole output for HDNNP2nd.
- Implemented jax backend for force models.
- Fixed `GraphBatchNormalization`.
- Fixed error in `kgcnn.io.loader` for unused IDs and graph state input.
- Added experimental `DisjointForceMeanAbsoluteError`.
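The safe-scatter option guards segment min/max aggregations against empty segments, which otherwise end up at the dtype minimum/maximum sentinel. A minimal numpy sketch of the idea (function name and signature are illustrative, not the kgcnn API):

```python
import numpy as np

def scatter_max_to_zero(values, indices, num_segments):
    """Segment-wise maximum that yields 0.0 for empty segments.

    A plain scatter-max initializes the output with -inf (or the dtype
    minimum), so segments receiving no value keep that sentinel. Mapping
    empty segments to zero avoids propagating it into later layers.
    Illustrative sketch only, not the kgcnn implementation.
    """
    out = np.full((num_segments,), -np.inf, dtype=float)
    np.maximum.at(out, indices, values)
    out[np.isneginf(out)] = 0.0  # empty segments default to zero
    return out

vals = np.array([1.0, 3.0, 2.0])
idx = np.array([0, 0, 2])
print(scatter_max_to_zero(vals, idx, 4))  # segments 1 and 3 are empty -> 0.0
```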
kgcnn v4.0.0
Completely reworked version of kgcnn for Keras 3.0 and multi-backend support. A lot of fundamental changes have been made.
However, we tried to keep as much of the API from kgcnn 3.0 as possible, so that models in literature can be used with minimal changes.
Mainly, the `input_tensor_type="ragged"` model parameter has to be added if ragged tensors are used as input in tensorflow.
For very few models, the order of inputs also had to be changed.
Also note that the input embedding layer now requires integer tensor input and no longer casts from float.
The scope of models has been reduced for the initial release but will be extended in upcoming versions.
Note that some changes also stem from keras API changes, for example the `learning_rate` parameter or serialization.
Moreover, tensorflow-addons had to be dropped for keras 3.0.
The general representation of graphs has been changed from ragged tensors (tensorflow only, not supported by keras 3.0) to the disjoint graph representation compatible with e.g. PyTorch Geometric.
Input can be padded or (still) ragged, or a direct disjoint representation with a batch loader (see the models chapter in the docs).
For jax we added a `padded_disjoint` parameter that can enable jit-able jax models but requires a data loader, which is not yet thoroughly implemented in kgcnn.
For padded samples it can already be tested, but padding each sample is a much larger overhead than padding the batch.
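The disjoint representation mentioned above can be sketched in a few lines of numpy: node features of all graphs in a batch are concatenated, edge indices are shifted by each graph's node offset, and a batch-id vector records graph membership. The helper below is an illustration of the layout, not the kgcnn loader API:

```python
import numpy as np

def to_disjoint(node_lists, edge_index_lists):
    """Merge a list of graphs into one disjoint 'super-graph'.

    Nodes are stacked, edge indices are shifted by the node offset of
    their graph, and batch_id maps every node back to its graph.
    """
    nodes = np.concatenate(node_lists, axis=0)
    offsets = np.cumsum([0] + [len(n) for n in node_lists[:-1]])
    edges = np.concatenate(
        [idx + off for idx, off in zip(edge_index_lists, offsets)], axis=0)
    batch_id = np.concatenate(
        [np.full(len(n), i) for i, n in enumerate(node_lists)])
    return nodes, edges, batch_id

# Two small graphs with 2 and 3 nodes respectively.
nodes, edges, batch_id = to_disjoint(
    [np.array([[1.0], [2.0]]), np.array([[3.0], [4.0], [5.0]])],
    [np.array([[0, 1]]), np.array([[0, 1], [1, 2]])],
)
print(batch_id)  # node-to-graph assignment: 0 0 1 1 1
print(edges)     # second graph's indices shifted by its offset of 2
```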
Some other changes:

- Reworked training scripts into a single `train_graph.py` script. Command line arguments are now optional and just used for verification, since the hyperparameter file already contains all necessary information; all but `category`, which has to select a model/hyperparameter combination from the hyper file.
- Train/test indices can now also be set and loaded from the dataset directly.
- Scaler behaviour has changed with regard to `transform_dataset`. Key names of properties to transform have been moved to the constructor! Also be sure to check `StandardLabelScaler` if you want to scale regression targets, since target properties are the default there.
- Literature models have an optional output scaler from the new `kgcnn.layers.scale` layer, controlled by the `output_scaling` model argument.
- Input embedding in literature models is now controlled with separate `input_node_embedding` or `input_edge_embedding` arguments, which can be set to `None` for no embedding. Also, embedding input tokens must now be of dtype int; there is no auto-casting from float anymore.
- New module `kgcnn.ops` with `kgcnn.backend` to generalize aggregation functions for graph operations.
- Reduced the models in literature. We will keep bringing all models of kgcnn<4.0.0 back in the next versions and run the benchmark training again.
kgcnn v3.1.0
- Added flexible charge for `rdkit_xyz_to_mol`, e.g. as list.
- Added `from_xyz` to `MolecularGraphRDKit`.
- Started additional `kgcnn.molecule.preprocessor` module for graph preprocessors.
- BREAKING CHANGES: Renamed module `kgcnn.layers.pooling` to `kgcnn.layers.aggr` for better compatibility. However, kept the legacy pooling module and all old aliases.
- Repaired bug in `RelationalMLP`.
- `HyperParameter` is not verified on initialization anymore; just call `hyper.verify()`.
- Moved losses from `kgcnn.metrics.loss` into the separate module `kgcnn.losses` to be more compatible with keras.
- Reworked training scripts, especially to simplify command line arguments and strengthen hyperparameters.
- Started with a potential keras-core port. Not yet tested or supported.
- Removed `get_split_indices` to make the graph indexing more consistent.
- Started with keras-core integration. Any code is WIP and not tested or working yet.
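The changed `HyperParameter` behaviour follows a deferred-validation pattern: construction only stores the config, and validation happens on an explicit `verify()` call. A minimal sketch of that pattern in plain Python (this mimics the described behaviour; it is not the kgcnn implementation):

```python
class LazyHyper:
    """Stores a config dict without validating it on construction."""

    def __init__(self, config):
        self.config = config  # stored as-is, no checks here anymore

    def verify(self):
        # Validation now happens only on explicit request.
        if "model" not in self.config:
            raise ValueError("Missing required key 'model'.")
        return True

hyper = LazyHyper({"model": {"name": "GCN"}})  # no error on construction
hyper.verify()  # explicit check, analogous to hyper.verify()
```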
kgcnn v3.0.2
- Added `add_eps` to `PAiNNUpdate` layer as an option.
- Reworked `data.transform.scaler.standard` to hopefully fix all remaining errors with the scalers.
- BREAKING CHANGES: Refactored activation functions `kgcnn.ops.activ` and layers `kgcnn.layers.activ` that have trainable parameters, due to keras changes in 2.13.0. Please check your config, since parameters are ignored in plain functions! For example, with "kgcnn>leaky_relu" you can no longer change the leak; you must use a `kgcnn.layers.activ` layer for that.
- Reworked `kgcnn.graph.methods.range_neighbour_lattice` to use pymatgen.
- Added `PolynomialDecayScheduler`.
- Added option for the force model to use the normal gradient, available via the `use_batch_jacobian` option.
- BREAKING CHANGES: Reworked `kgcnn.layers.gather` to reduce/simplify code and speed up some models. The behaviour of `GatherNodes` has changed a little in that it first splits and then concatenates. The default parameters now have `split_axis` and `concat_axis` set to 2. `concat_indices` has been removed. The default behaviour of the layer, however, stays the same.
- Fixed an error in layer `FracToRealCoordinates` and improved its speed.
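The split-then-concatenate behaviour of `GatherNodes` can be sketched for a single graph in numpy: node features are looked up per edge endpoint and then concatenated along the feature axis. The function below illustrates the operation only; the kgcnn layer works on batched tensors with configurable axes:

```python
import numpy as np

def gather_nodes(node_features, edge_indices):
    """Gather node features at each edge endpoint, then concatenate.

    First 'split' the endpoint index columns, gather features for each,
    then 'concat' along the last (feature) axis — mirroring the
    split-then-concatenate order described in the changelog.
    """
    gathered = [node_features[edge_indices[:, i]]
                for i in range(edge_indices.shape[1])]
    return np.concatenate(gathered, axis=-1)

nodes = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
edges = np.array([[0, 1], [1, 2]])
print(gather_nodes(nodes, edges))
# [[1. 2. 3. 4.]
#  [3. 4. 5. 6.]]
```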
kgcnn v3.0.1
- Removed deprecated molecules.
- Fixed error in `kgcnn.data.transform.scaler.serial`.
- Fixed error in `QMDataset` if attributes have been chosen. Now `set_attributes` does not cause an error.
- Fixed error in `QMDataset` with labels without SDF file.
- Fixed error in `kgcnn.layers.conv.GraphSageNodeLayer`.
- Added `reverse_edge_indices` option to `GraphDict.from_networkx`. Fixed error in connection with `kgcnn.crystal`.
- Started with `kgcnn.io.file`. Experimental; will get more updates.
- Fixed error with `StandardLabelScaler` inheritance.
- Added workflow notebook examples.
- Fixed import error in `kgcnn.crystal.periodic_table` to now properly include package data.
kgcnn v3.0.0
Major refactoring of kgcnn layers and models. We try to provide the most important layers for graph convolution as `kgcnn.layers` with ragged tensor representation. For literature models, only input and output are matched with kgcnn.

- Moved `kgcnn.layers.conv` to `kgcnn.literature`.
- Refactored all graph methods in `graph.methods`.
- Moved `kgcnn.mol.*` and `kgcnn.moldyn.*` into `kgcnn.molecule`.
- Moved `hyper` into `training`.
- Updated `crystal`.
kgcnn v2.2.4
- Added `ACSFConstNormalization` to literature models as an option.
- Adjusted and reworked `MLP`; now includes more normalization options.
- Removed 'is_sorted', 'node_indexing' and 'has_unconnected' from `GraphBaseLayer` and added them to the pooling layers directly.
kgcnn v2.2.3
- HOTFIX: Changed `MemoryGraphList.tensor()` so that the correct dtype is given to the tensor output. This is important for model loading etc.
- Added `CENTChargePlusElectrostaticEnergy` to `kgcnn.layers.conv.hdnnp_conv` and `kgcnn.literature.HDNNP4th`.
- Fixed bug in the latest `train_force.py` of v2.2.2 that forgot to apply inverse scaling to the dataset, causing subsequent folds to have wrong labels.
- HOTFIX: Updated `MolDynamicsModelPredictor` to call the keras model without very expensive retracing. For the alternative mode, use `use_predict=True`.
- Updated training results and data subclasses for matbench datasets.
- Added `GraphInstanceNormalization` and `GraphNormalization` to `kgcnn.layers.norm`.
kgcnn v2.2.2
- Reworked all scaler classes to have separate names for using either X or y, for example `StandardScaler` or `StandardLabelScaler`.
- Moved scalers to `kgcnn.data.transform`. We will expand on this in the future.
- IMPORTANT: Renamed and changed behaviour of `EnergyForceExtensiveScaler`. The new name is `EnergyForceExtensiveLabelScaler`. The return is just y now. Added experimental functionality for transforming datasets.
- Adjusted training scripts for the new scalers.
- Reduced requirements for tensorflow to 2.9.
- Renamed `kgcnn.md` to `kgcnn.moldyn` due to naming conflicts with markdown.
- In `MolDynamicsModelPredictor`, renamed argument `model_postprocessor` to `graph_postprocessor`.
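For context on the extensive label scalers above: an extensive property such as total energy grows with composition, so a least-squares fit of per-element reference energies is typically removed before scaling. A numpy sketch of that idea, assuming a simple composition-count matrix (this is an illustration, not the `EnergyForceExtensiveLabelScaler` implementation):

```python
import numpy as np

def fit_atomic_reference(composition, energies):
    """Fit per-element reference energies by least squares.

    composition: (n_samples, n_elements) counts of each element type
    energies:    (n_samples,) total energies
    Subtracting composition @ ref removes the extensive part of the
    labels, leaving a residual that is easier to standardize.
    """
    ref, *_ = np.linalg.lstsq(composition, energies, rcond=None)
    return ref

# Toy data: element counts (H, O) and total energies.
comp = np.array([[2.0, 1.0], [1.0, 0.0], [0.0, 2.0]])
e = np.array([-10.0, -1.0, -16.0])
ref = fit_atomic_reference(comp, e)
print(ref)            # fitted per-element reference energies: [-1. -8.]
print(e - comp @ ref)  # extensive part removed -> residual ~ 0
```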
kgcnn v2.2.1
- HOTFIX: Removed `tensorflow_gpu` from setup.py.
- Added `HDNNP4th.py` to literature.
- Fixed error in `ChangeTensorType` config for model save.
- Merged pull request #103 for `kgcnn.xai`.