Changelog

All notable changes to this project will be documented in this file. The format is based on Keep a Changelog.

[2.2.0] - 2022-MM-DD

Added

  • Added a return_semantic_attention_weights argument to HANConv (#5787)
  • Added disjoint argument to NeighborLoader and LinkNeighborLoader (#5775)
  • Added support for input_time in NeighborLoader (#5763)
  • Added disjoint mode for temporal LinkNeighborLoader (#5717)
  • Added HeteroData support for transforms.Constant (#5700)
  • Added np.memmap support in NeighborLoader (#5696)
  • Added assortativity that computes the degree assortativity coefficient (#5587)
  • Added SSGConv layer (#5599)
  • Added shuffle_node, mask_feature and add_random_edge augmentation methods (#5548); see the sketch after this list
  • Added dropout_path augmentation that drops edges from a graph based on random walks (#5531)
  • Add support for filling labels with dummy values in HeteroData.to_homogeneous() (#5540)
  • Added temporal_strategy option to neighbor_sample (#5576)
  • Added torch_geometric.sampler package to docs (#5563)
  • Added the DGraphFin dynamic graph dataset (#5504)
  • Added dropout_edge augmentation that randomly drops edges from a graph - the usage of dropout_adj is now deprecated (#5495)
  • Add support for precomputed edges in SchNet model (#5401)
  • Added dropout_node augmentation that randomly drops nodes from a graph (#5481)
  • Added AddRandomMetaPaths that adds edges based on random walks along a metapath (#5397)
  • Added WLConvContinuous for performing WL refinement with continuous attributes (#5316)
  • Added print_summary method for the torch_geometric.data.Dataset interface (#5438)
  • Added sampler support to LightningDataModule (#5456, #5457)
  • Added official splits to MalNetTiny dataset (#5078)
  • Added IndexToMask and MaskToIndex transforms (#5375, #5455)
  • Added FeaturePropagation transform (#5387)
  • Added PositionalEncoding (#5381)
  • Consolidated sampler routines behind torch_geometric.sampler, enabling ease of extensibility in the future (#5312, #5365, #5402, #5404, #5418)
  • Added pyg-lib neighbor sampling (#5384, #5388)
  • Added pyg_lib.segment_matmul integration within HeteroLinear (#5330, #5347)
  • Enabled bf16 support in benchmark scripts (#5293, #5341)
  • Added Aggregation.set_validate_args option to skip validation of dim_size (#5290)
  • Added SparseTensor support to inference benchmark suite (#5242, #5258)
  • Added experimental mode in inference benchmarks (#5254)
  • Added node classification example instrumented with Weights and Biases (W&B) logging and W&B Sweeps (#5192)
  • Added experimental mode for utils.scatter (#5232, #5241, #5386)
  • Added missing test labels in HGBDataset (#5233)
  • Added BaseStorage.get() functionality (#5240)
  • Added a test to confirm that to_hetero works with SparseTensor (#5222)
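
A minimal sketch of the new functional augmentations in torch_geometric.utils referenced above (the toy graph, probabilities, and feature sizes are illustrative; check the documentation for exact defaults and return values):

```python
import torch
from torch_geometric.utils import add_random_edge, dropout_edge, mask_feature

x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])

# Randomly drop edges; also returns the mask of retained edges.
edge_index_dropped, edge_mask = dropout_edge(edge_index, p=0.3)

# Randomly mask out feature columns (mode='col' chosen for illustration).
x_masked, feat_mask = mask_feature(x, p=0.3, mode='col')

# Insert a fraction of random edges; also returns the newly added edges.
edge_index_aug, added_edges = add_random_edge(edge_index, p=0.2, num_nodes=4)
```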

Changed

  • Fixed path in hetero_conv_dblp.py example (#5686)
  • Fix auto_select_device routine in GraphGym for PyTorch Lightning>=1.7 (#5677)
  • Support in_channels with tuple in GENConv for bipartite message passing (#5627, #5641)
  • Handle cases of not having enough possible negative edges in RandomLinkSplit (#5642)
  • Fix RGCN+pyg-lib for LongTensor input (#5610)
  • Improved type hint support (#5603, #5659, #5664, #5665, #5666, #5667, #5668, #5669, #5673, #5675, #5673, #5678, #5682, #5683, #5684, #5685, #5687, #5688, #5695, #5699, #5701, #5702, #5703, #5706, #5707, #5710, #5714, #5715, #5716, #5722, #5724, #5725, #5726, #5729, #5730, #5731, #5732, #5733, #5743, #5734, #5735, #5736, #5737, #5738, #5747, #5752, #5753, #5754, #5756, #5757, #5758, #5760, #5766, #5767, #5768, #5781, #5778, #5797, #5798, #5799, #5800)
  • Avoid modifying mode_kwargs in MultiAggregation (#5601)
  • Changed BatchNorm to allow for batches of size one during training (#5530, #5614)
  • Integrated better temporal sampling support by requiring that local neighborhoods are sorted according to time (#5516, #5602)
  • Fixed a bug when applying several scalers with PNAConv (#5514)
  • Allow . in ParameterDict key names (#5494)
  • Renamed drop_unconnected_nodes to drop_unconnected_node_types and drop_orig_edges to drop_orig_edge_types in AddMetapaths (#5490)
  • Improved utils.scatter performance by explicitly choosing a better implementation for add and mean reductions (#5399)
  • Fix to_dense_adj with empty edge_index (#5476)
  • The AttentionalAggregation module can now be applied to compute attention on a per-feature level (#5449); see the sketch after this list
  • Ensure equal lengths of num_neighbors across edge types in NeighborLoader (#5444)
  • Fixed a bug in TUDataset in which node features were wrongly constructed whenever node_attributes only hold a single feature (e.g., in PROTEINS) (#5441)
  • Breaking change: removed num_neighbors as an attribute of loader (#5404)
  • ASAPooling is now jittable (#5395)
  • Updated unsupervised GraphSAGE example to leverage LinkNeighborLoader (#5317)
  • Replace in-place operations with out-of-place ones to align with torch.scatter_reduce API (#5353)
  • Breaking bugfix: PointTransformerConv now correctly uses sum aggregation (#5332)
  • Improve out-of-bounds error message in MessagePassing (#5339)
  • Allow file names of a Dataset to be specified as either a property or a method (#5338)
  • Fixed separating a list of SparseTensor within InMemoryDataset (#5299)
  • Improved name resolving of normalization layers (#5277)
  • Fail gracefully on GLIBC errors within torch-spline-conv (#5276)
  • Fixed Dataset.num_classes in case a transform modifies data.y (#5274)
  • Allow customization of the activation function within PNAConv (#5262)
  • Do not fill InMemoryDataset cache on dataset.num_features (#5264)
  • Changed tests relying on dblp datasets to instead use synthetic data (#5250)
  • Fixed a bug for the initialization of activation function examples in custom_graphgym (#5243)
  • Allow any integer tensors when checking edge_index input to message passing (#5281)
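
A minimal sketch of per-feature attention with AttentionalAggregation, as referenced above (layer sizes and the toy assignment vector are illustrative):

```python
import torch
from torch_geometric.nn.aggr import AttentionalAggregation

x = torch.randn(6, 32)                    # node features
index = torch.tensor([0, 0, 0, 1, 1, 1])  # graph assignment of each node

# A gate network mapping 32 -> 32 yields one attention score per feature;
# mapping 32 -> 1 recovers the classic per-node attention instead.
aggr = AttentionalAggregation(gate_nn=torch.nn.Linear(32, 32))
out = aggr(x, index)                      # shape: [2, 32]
```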

Removed

  • Removed scatter_reduce option from experimental mode (#5399)

[2.1.0] - 2022-08-17

Added

  • Added the test for DeepGCNLayer (#5704)
  • Allow . in ModuleDict key names (#5227)
  • Added edge_label_time argument to LinkNeighborLoader (#5137, #5173)
  • Let ImbalancedSampler accept torch.Tensor as input (#5138)
  • Added flow argument to gcn_norm to correctly normalize the adjacency matrix in GCNConv (#5149)
  • NeighborSampler supports graphs without edges (#5072)
  • Added the MeanSubtractionNorm layer (#5068)
  • Added pyg_lib.segment_matmul integration within RGCNConv (#5052, #5096)
  • Support SparseTensor as edge label in LightGCN (#5046)
  • Added support for BasicGNN models within to_hetero (#5091)
  • Added support for computing weighted metapaths in AddMetapaths (#5049)
  • Added inference benchmark suite (#4915)
  • Added a dynamically sized batch sampler for filling a mini-batch with a variable number of samples up to a maximum size (#4972)
  • Added fine grained options for setting bias and dropout per layer in the MLP model (#4981)
  • Added EdgeCNN model (#4991)
  • Added scalable inference mode in BasicGNN with layer-wise neighbor loading (#4977)
  • Added inference benchmarks (#4892, #5107)
  • Added PyTorch 1.12 support (#4975)
  • Added unbatch_edge_index functionality for splitting an edge_index tensor according to a batch vector (#4903); see the sketch after this list
  • Added node-wise normalization mode in LayerNorm (#4944)
  • Added support for normalization_resolver (#4926, #4951, #4958, #4959)
  • Added notebook tutorial for torch_geometric.nn.aggr package to documentation (#4927)
  • Added support for follow_batch for lists or dictionaries of tensors (#4837)
  • Added Data.validate() and HeteroData.validate() functionality (#4885)
  • Added LinkNeighborLoader support to LightningDataModule (#4868)
  • Added predict() support to the LightningNodeData module (#4884)
  • Added time_attr argument to LinkNeighborLoader (#4877, #4908)
  • Added a filter_per_worker argument to data loaders to allow filtering of data within sub-processes (#4873)
  • Added a NeighborLoader benchmark script (#4815, #4862)
  • Added support for FeatureStore and GraphStore in NeighborLoader (#4817, #4851, #4854, #4856, #4857, #4882, #4883, #4929, #4992, #4962, #4968, #5037, #5088, #5270, #5307, #5318)
  • Added a normalize parameter to dense_diff_pool (#4847)
  • Added size=None explanation to jittable MessagePassing modules in the documentation (#4850)
  • Added documentation to the DataLoaderIterator class (#4838)
  • Added GraphStore support to Data and HeteroData (#4816)
  • Added FeatureStore support to Data and HeteroData (#4807, #4853)
  • Added FeatureStore and GraphStore abstractions (#4534, #4568, #5120)
  • Added support for dense aggregations in global_*_pool (#4827)
  • Added Python version requirement (#4825)
  • Added TorchScript support to JumpingKnowledge module (#4805)
  • Added a max_sample argument to AddMetaPaths in order to tackle very dense metapath edges (#4750)
  • Test HANConv with empty tensors (#4756, #4841)
  • Added the bias vector to the GCN model definition in the "Create Message Passing Networks" tutorial (#4755)
  • Added transforms.RootedSubgraph interface with two implementations: RootedEgoNets and RootedRWSubgraph (#3926)
  • Added ptr vectors for follow_batch attributes within Batch.from_data_list (#4723)
  • Added torch_geometric.nn.aggr package (#4687, #4721, #4731, #4762, #4749, #4779, #4863, #4864, #4865, #4866, #4872, #4934, #4935, #4957, #4973, #4973, #4986, #4995, #5000, #5034, #5036, #5039, #4522, #5033, #5085, #5097, #5099, #5104, #5113, #5130, #5098, #5191)
  • Added the DimeNet++ model (#4432, #4699, #4700, #4800)
  • Added an example of using PyG with PyTorch Ignite (#4487)
  • Added GroupAddRev module with support for reducing training GPU memory (#4671, #4701, #4715, #4730)
  • Added benchmarks via wandb (#4656, #4672, #4676)
  • Added unbatch functionality (#4628)
  • Confirm that to_hetero() works with custom functions, e.g., dropout_adj (#4653)
  • Added the MLP.plain_last=False option (#4652)
  • Added a check in HeteroConv and to_hetero() to ensure that MessagePassing.add_self_loops is disabled (#4647)
  • Added HeteroData.subgraph(), HeteroData.node_type_subgraph() and HeteroData.edge_type_subgraph() support (#4635)
  • Added the AQSOL dataset (#4626)
  • Added HeteroData.node_items() and HeteroData.edge_items() functionality (#4644)
  • Added PyTorch Lightning support in GraphGym (#4511, #4516, #4531, #4689, #4843)
  • Added support for returning embeddings in MLP models (#4625)
  • Added faster initialization of NeighborLoader in case edge indices are already sorted (via is_sorted=True) (#4620, #4702)
  • Added AddPositionalEncoding transform (#4521)
  • Added HeteroData.is_undirected() support (#4604)
  • Added the Genius and Wiki datasets to datasets.LINKXDataset (#4570, #4600)
  • Added nn.aggr.EquilibriumAggregation implicit global layer (#4522)
  • Added support for graph-level outputs in to_hetero (#4582)
  • Added CHANGELOG.md (#4581)
  • Added HeteroData support to the RemoveIsolatedNodes transform (#4479)
  • Added HeteroData.num_features functionality (#4504)
  • Added support for projecting features before propagation in SAGEConv (#4437)
  • Added Geom-GCN splits to the Planetoid datasets (#4442)
  • Added a LinkNeighborLoader for training scalable link prediction models (#4396, #4439, #4441, #4446, #4508, #4509)
  • Added an unsupervised GraphSAGE example on PPI (#4416)
  • Added support for LSTM aggregation in SAGEConv (#4379)
  • Added support for floating-point labels in RandomLinkSplit (#4311, #4383)
  • Added support for torch.data DataPipes (#4302, #4345, #4349)
  • Added support for the cosine argument in the KNNGraph/RadiusGraph transforms (#4344)
  • Added support for graph-level attributes in networkx conversion (#4343)
  • Added support for renaming node types via HeteroData.rename (#4329)
  • Added an example to load a trained PyG model in C++ (#4307)
  • Added a MessagePassing.explain_message method to customize making explanations on messages (#4278, #4448)
  • Added support for GATv2Conv in the nn.models.GAT model (#4357)
  • Added HeteroData.subgraph functionality (#4243)
  • Added the MaskLabel module and a corresponding masked label propagation example (#4197)
  • Added temporal sampling support to NeighborLoader (#4025)
  • Added an example for unsupervised heterogeneous graph learning based on "Deep Multiplex Graph Infomax" (#3189)
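
A minimal sketch of the new Data.validate() check and the unbatch_edge_index helper referenced above (the toy graph and batch vector are illustrative):

```python
import torch
from torch_geometric.data import Data
from torch_geometric.utils import unbatch_edge_index

edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])
data = Data(x=torch.randn(4, 8), edge_index=edge_index)
data.validate(raise_on_error=True)  # sanity-check shapes and index bounds

# Split a batched edge_index back into per-graph, locally re-indexed tensors.
batch = torch.tensor([0, 0, 1, 1])  # nodes 0-1 belong to graph 0, nodes 2-3 to graph 1
per_graph = unbatch_edge_index(edge_index, batch)
# per_graph[0] == [[0, 1], [1, 0]] and per_graph[1] == [[0, 1], [1, 0]]
```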

Changed

  • Changed docstring for RandomLinkSplit (#5190)
  • Switched to PyTorch scatter_reduce implementation - experimental feature (#5120)
  • Fixed RGATConv device mismatches for f-scaled mode (#5187)
  • Allow for multi-dimensional edge_labels in LinkNeighborLoader (#5186)
  • Fixed GINEConv bug with non-sequential input (#5154)
  • Improved error message (#5095)
  • Fixed HGTLoader bug which produced outputs with missing edge types (#5067)
  • Fixed dynamic inheritance issue in data batching (#5051)
  • Fixed load_state_dict in Linear with strict=False mode (#5094)
  • Fixed typo in MaskLabel.ratio_mask (#5093)
  • Fixed data.num_node_features computation for sparse matrices (#5089)
  • Fixed torch.fx bug with the torch_geometric.nn.aggr package (#5021)
  • Fixed GenConv test (#4993)
  • Fixed packaging tests for Python 3.10 (#4982)
  • Changed act_dict (part of graphgym) to create individual instances instead of reusing the same ones everywhere (#4978)
  • Fixed issue where one-hot tensors were passed to F.one_hot (#4970)
  • Fixed bool arguments in argparse in benchmark/ (#4967)
  • Fixed BasicGNN for num_layers=1, which now respects a desired number of out_channels (#4943)
  • len(batch) will now return the number of graphs inside the batch, not the number of attributes (#4931); see the sketch after this list
  • Fixed data.subgraph generation for 0-dim tensors (#4932)
  • Removed unnecessary inclusion of self-loops when sampling negative edges (#4880)
  • Fixed InMemoryDataset inferring wrong len for lists of tensors (#4837)
  • Fixed Batch.separate when using it for lists of tensors (#4837)
  • Correct docstring for SAGEConv (#4852)
  • Fixed a bug in TUDataset where pre_filter was not applied whenever pre_transform was present
  • Renamed RandomTranslate to RandomJitter - the usage of RandomTranslate is now deprecated (#4828)
  • Do not allow accessing edge types in HeteroData with two node types when there exist multiple relations between these types (#4782)
  • Allow edge_type == rev_edge_type argument in RandomLinkSplit (#4757, #5221)
  • Fixed a numerical instability in the GeneralConv and neighbor_sample tests (#4754)
  • Fixed a bug in HANConv in which destination node features rather than source node features were propagated (#4753)
  • Fixed versions of checkout and setup-python in CI (#4751)
  • Fixed protobuf version (#4719)
  • Fixed the ranking protocol bug in the RGCN link prediction example (#4688)
  • Math support in Markdown (#4683)
  • Allow for setter properties in Data (#4682, #4686)
  • Allow for optional edge_weight in GCN2Conv (#4670)
  • Fixed the interplay between TUDataset and pre_transform that modify node features (#4669)
  • Make use of the pyg_sphinx_theme documentation template (#4664, #4667)
  • Refactored reading molecular positions from the SDF file for QM9 datasets (#4654)
  • Fixed MLP.jittable() bug in case return_emb=True (#4645, #4648)
  • The generated node features of StochasticBlockModelDataset are now ordered with respect to their labels (#4617)
  • Fixed typos in the documentation (#4616, #4824, #4895, #5161)
  • The bias argument in TAGConv is now actually applied (#4597)
  • Fixed subclass behaviour of process and download in Dataset (#4586)
  • Fixed filtering of attributes for loaders in case __cat_dim__ != 0 (#4629)
  • Fixed SparseTensor support in NeighborLoader (#4320)
  • Fixed average degree handling in PNAConv (#4312)
  • Fixed a bug in from_networkx in case some attributes are PyTorch tensors (#4486)
  • Added a missing clamp in DimeNet (#4506, #4562)
  • Fixed the download link in DBP15K (#4428)
  • Fixed an autograd bug in DimeNet when resetting parameters (#4424)
  • Fixed bipartite message passing in case flow="target_to_source" (#4418)
  • Fixed a bug in which num_nodes was not properly updated in the FixedPoints transform (#4394)
  • PyTorch Lightning >= 1.6 support (#4377)
  • Fixed a bug in which GATConv was not jittable (#4347)
  • Fixed a bug in which the GraphGym config was not stored in each specific experiment directory (#4338)
  • Fixed a bug in which nn.models.GAT did not produce out_channels-many output channels (#4299)
  • Fixed mini-batching with empty lists as attributes (#4293)
  • Fixed a bug in which GCNConv could not be combined with to_hetero on heterogeneous graphs with one node type (#4279)
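
A minimal sketch of the new len(batch) behaviour referenced above (the toy graphs are illustrative):

```python
import torch
from torch_geometric.data import Batch, Data

d1 = Data(x=torch.randn(3, 4), edge_index=torch.tensor([[0, 1], [1, 2]]))
d2 = Data(x=torch.randn(2, 4), edge_index=torch.tensor([[0], [1]]))

batch = Batch.from_data_list([d1, d2])
assert len(batch) == 2        # number of graphs, no longer the number of attributes
assert batch.num_graphs == 2
```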

Removed

  • Remove internal metrics in favor of torchmetrics (#4287)