Interaction layer specific settings #92

Open
wants to merge 9 commits into base: develop
38 changes: 38 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,38 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]

**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]

**Additional context**
Add any other context about the problem here.
20 changes: 20 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
101 changes: 10 additions & 91 deletions LICENSE.md

Large diffs are not rendered by default.

46 changes: 40 additions & 6 deletions README.md
@@ -1,20 +1,54 @@
# MACE
<span style="font-size:larger;">MACE</span>
========
[![GitHub release](https://img.shields.io/github/release/ACEsuit/mace.svg)](https://GitHub.com/ACEsuit/mace/releases/)
[![Paper](https://img.shields.io/badge/Paper-NeurIPs2022-blue)](https://openreview.net/forum?id=YPpSngE-ZU)
[![License](https://img.shields.io/badge/License-MIT%202.0-blue.svg)](https://opensource.org/licenses/mit)
[![GitHub issues](https://img.shields.io/github/issues/ACEsuit/mace.svg)](https://GitHub.com/ACEsuit/mace/issues/)
[![Documentation Status](https://readthedocs.org/projects/mace/badge/)](https://mace-docs.readthedocs.io/en/latest/)

# Table of contents
- [About MACE](#about-mace)
- [Documentation](#documentation)
- [Installation](#installation)
- [Usage](#usage)
- [Training](#training)
- [Evaluation](#evaluation)
- [Tutorial](#tutorial)
- [Weights and Biases](#weights-and-biases-for-experiment-tracking)
- [Development](#development)
- [References](#references)
- [Contact](#contact)
- [License](#license)


## About MACE
MACE provides fast and accurate machine learning interatomic potentials with higher order equivariant message passing.

This repository contains the MACE reference implementation developed by
Ilyes Batatia, Gregor Simm, and David Kovacs.

Also available:
* [MACE in JAX](https://github.com/ACEsuit/mace-jax), currently about 2x faster at evaluation; training is still recommended in PyTorch for optimal performance.
* [MACE layers](https://github.com/ACEsuit/mace-layer) for constructing higher order equivariant graph neural networks for arbitrary 3D point clouds.

## Documentation

Partial documentation is available at: https://mace-docs.readthedocs.io/en/latest/

## Installation

Requirements:
* Python >= 3.7
* [PyTorch](https://pytorch.org/) >= 1.8
* [PyTorch](https://pytorch.org/) >= 1.12

(for OpenMM, use Python 3.9)

### conda installation

If you do not have CUDA pre-installed, it is **recommended** to follow the conda installation process:
```sh
# Create a virtual environment and activate it
conda create mace_env
conda create --name mace_env
conda activate mace_env

# Install PyTorch
@@ -36,8 +70,8 @@ To install via `pip`, follow the steps below:
python -m venv mace-venv
source mace-venv/bin/activate

# Install PyTorch (for example, for CUDA 10.2 [cu102])
pip install torch==1.8.2 --extra-index-url "https://download.pytorch.org/whl/lts/1.8/cu102"
# Install PyTorch (for example, for CUDA 11.6 [cu116])
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

# Clone and install MACE (and all required packages)
git clone [email protected]:ACEsuit/mace.git
@@ -208,4 +242,4 @@ For bugs or feature requests, please use [GitHub Issues](https://github.com/ACEs

## License

MACE is published and distributed under the [MIT](MIT.md).
MACE is published and distributed under the [MIT License](MIT.md).
2 changes: 1 addition & 1 deletion mace/__version__.py
@@ -1 +1 @@
__version__ = "0.1.0"
__version__ = "0.2.0"
124 changes: 121 additions & 3 deletions mace/calculators/mace.py
@@ -11,6 +11,9 @@

from mace import data
from mace.tools import torch_geometric, torch_tools, utils
from typing import Union
from glob import glob
import numpy as np


class MACECalculator(Calculator):
@@ -25,7 +28,7 @@ def __init__(
energy_units_to_eV: float = 1.0,
length_units_to_A: float = 1.0,
default_dtype="float64",
**kwargs
**kwargs,
):
Calculator.__init__(self, **kwargs)
self.results = {}
@@ -40,6 +43,8 @@ def __init__(
)

torch_tools.set_default_dtype(default_dtype)
for param in self.model.parameters():
param.requires_grad = False

# pylint: disable=dangerous-default-value
def calculate(self, atoms=None, properties=None, system_changes=all_changes):
@@ -106,7 +111,7 @@ def __init__(
length_units_to_A: float = 1.0,
default_dtype="float64",
charges_key="Qs",
**kwargs
**kwargs,
):
"""
:param charges_key: str, Array field of atoms object where atomic charges are stored
@@ -124,6 +129,8 @@ def __init__(
self.charges_key = charges_key

torch_tools.set_default_dtype(default_dtype)
for param in self.model.parameters():
param.requires_grad = False

# pylint: disable=dangerous-default-value
def calculate(self, atoms=None, properties=None, system_changes=all_changes):
@@ -180,7 +187,7 @@ def __init__(
length_units_to_A: float = 1.0,
default_dtype="float64",
charges_key="Qs",
**kwargs
**kwargs,
):
"""
:param charges_key: str, Array field of atoms object where atomic charges are stored
@@ -199,6 +206,8 @@ def __init__(
self.charges_key = charges_key

torch_tools.set_default_dtype(default_dtype)
for param in self.model.parameters():
param.requires_grad = False

# pylint: disable=dangerous-default-value
def calculate(self, atoms=None, properties=None, system_changes=all_changes):
@@ -250,3 +259,112 @@ def calculate(self, atoms=None, properties=None, system_changes=all_changes):
self.results["stress"] = (
stress * (self.energy_units_to_eV / self.length_units_to_A**3)
)[0]


class MACECommitteeCalculator(Calculator):
"""MACE ASE Committee Calculator

This calculator can be used to obtain committee error estimates for the energies and
forces predicted by a committee of MACE models. To load multiple models, either pass
a list of file names or a single string containing a wildcard to the model_paths
argument. This calculator can also be used to load a single model.
"""

implemented_properties = ["energy", "free_energy", "forces", "stress"]

def __init__(
self,
model_paths: Union[list, str],
device: str,
energy_units_to_eV: float = 1.0,
length_units_to_A: float = 1.0,
default_dtype="float64",
**kwargs,
):
Calculator.__init__(self, **kwargs)
self.results = {}

if isinstance(model_paths, str):
# Find all models that satisfy the wildcard (e.g. mace_model_*.pt)
model_paths_glob = glob(model_paths)

if len(model_paths_glob) == 0:
raise ValueError(f"Couldn't find MACE model files: {model_paths}")
else:
model_paths = model_paths_glob
if len(model_paths) == 0:
raise ValueError("No MACE model file names supplied")
elif len(model_paths) > 1:
print(f"Running committee MACE with {len(model_paths)} models")

# Load models
self.models = [
torch.load(f=model_path, map_location=device) for model_path in model_paths
]

r_maxs = [model.r_max.cpu() for model in self.models]
r_maxs = np.array(r_maxs)
assert np.all(
r_maxs == r_maxs[0]
), f"committee r_max values are not all the same: {r_maxs}"
self.r_max = r_maxs[0]

self.device = torch_tools.init_device(device)
self.energy_units_to_eV = energy_units_to_eV
self.length_units_to_A = length_units_to_A
self.z_table = utils.AtomicNumberTable(
[int(z) for z in self.models[0].atomic_numbers]
)

torch_tools.set_default_dtype(default_dtype)

# pylint: disable=dangerous-default-value
def calculate(self, atoms=None, properties=None, system_changes=all_changes):
"""
Calculate properties.
:param atoms: ase.Atoms object
:param properties: [str], properties to be computed, used by ASE internally
:param system_changes: [str], system changes since last calculation, used by
ASE internally
:return:
"""
# call to base-class to set atoms attribute
Calculator.calculate(self, atoms)

# prepare data
config = data.config_from_atoms(atoms)
data_loader = torch_geometric.dataloader.DataLoader(
dataset=[
data.AtomicData.from_config(
config, z_table=self.z_table, cutoff=self.r_max
)
],
batch_size=1,
shuffle=False,
drop_last=False,
)
batch = next(iter(data_loader)).to(self.device)

# predict + extract data
energies, forces = [], []
for i, model in enumerate(self.models):
# Re-create the batch for each model; otherwise: RuntimeError: you can only change requires_grad flags of leaf variables.
batch = next(iter(data_loader)).to(self.device)
out = model(batch, compute_stress=True)
energies.append(out["energy"].detach().cpu().item())
forces.append(out["forces"].detach().cpu().numpy())

# convert units
energies = np.array(energies) * self.energy_units_to_eV
# forces have units of energy / length:
forces = np.array(forces) * self.energy_units_to_eV / self.length_units_to_A
# store results
self.results = {
"energies": energies,
"forcess": forces,
"energy": np.mean(energies),
"free_energy": np.mean(energies),
"energy_var": np.var(energies),
"forces": np.mean(forces, axis=0),
"forces_var": np.var(forces, axis=0),
}
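For orientation, here is a brief usage sketch of the committee calculator added above. The checkpoint path, the water molecule, and the exact import path are illustrative assumptions rather than part of this PR.

```python
from ase.build import molecule

from mace.calculators.mace import MACECommitteeCalculator

# Load every checkpoint matching the wildcard as one committee
# (the path below is a hypothetical placeholder).
calc = MACECommitteeCalculator(
    model_paths="checkpoints/mace_model_*.pt",
    device="cpu",
)

atoms = molecule("H2O")
atoms.calc = calc

# The committee mean is exposed through the usual ASE interface ...
energy = atoms.get_potential_energy()
forces = atoms.get_forces()

# ... while the spread across models is kept in calc.results.
energy_var = calc.results["energy_var"]
forces_var = calc.results["forces_var"]
print(energy, energy_var, forces_var.shape)
```

The `energy_var` and `forces_var` entries hold the variance across committee members, which is what the docstring refers to as error estimates.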
21 changes: 19 additions & 2 deletions mace/data/atomic_data.py
@@ -7,6 +7,7 @@
from typing import Optional, Sequence

import torch.utils.data
import numpy as np

from mace.tools import (
AtomicNumberTable,
@@ -46,6 +47,7 @@ class AtomicData(torch_geometric.data.Data):
def __init__(
self,
edge_index: torch.Tensor, # [2, n_edges]
edge_index_mask: Optional[torch.Tensor], # [n_layers, n_edges]
node_attrs: torch.Tensor, # [n_nodes, n_node_feats]
positions: torch.Tensor, # [n_nodes, 3]
shifts: torch.Tensor, # [n_edges, 3],
@@ -67,6 +69,7 @@ def __init__(
num_nodes = node_attrs.shape[0]

assert edge_index.shape[0] == 2 and len(edge_index.shape) == 2
assert edge_index_mask.shape[1] == edge_index.shape[1]
assert positions.shape == (num_nodes, 3)
assert shifts.shape[1] == 3
assert unit_shifts.shape[1] == 3
@@ -87,6 +90,7 @@ def __init__(
data = {
"num_nodes": num_nodes,
"edge_index": edge_index,
"edge_mask": edge_index_mask.T,
"positions": positions,
"shifts": shifts,
"unit_shifts": unit_shifts,
@@ -108,11 +112,23 @@ def __init__(

@classmethod
def from_config(
cls, config: Configuration, z_table: AtomicNumberTable, cutoff: float
cls, config: Configuration, z_table: AtomicNumberTable, cutoff: torch.Tensor
) -> "AtomicData":

# Get edge index for the largest cutoff
edge_index, shifts, unit_shifts = get_neighborhood(
positions=config.positions, cutoff=cutoff, pbc=config.pbc, cell=config.cell
positions=config.positions,
cutoff=torch.max(cutoff).item(),
pbc=config.pbc,
cell=config.cell,
)

# Create edge mask for each cutoff distance
edge_distance = np.linalg.norm(
config.positions[edge_index[0]] - config.positions[edge_index[1]] - shifts,
axis=1,
)
edge_index_mask = torch.tensor(edge_distance, device=cutoff.device) < cutoff[:, None]
indices = atomic_numbers_to_indices(config.atomic_numbers, z_table=z_table)
one_hot = to_one_hot(
torch.tensor(indices, dtype=torch.long).unsqueeze(-1),
@@ -192,6 +208,7 @@ def from_config(

return cls(
edge_index=torch.tensor(edge_index, dtype=torch.long),
edge_index_mask=edge_index_mask.to(torch.bool),
positions=torch.tensor(config.positions, dtype=torch.get_default_dtype()),
shifts=torch.tensor(shifts, dtype=torch.get_default_dtype()),
unit_shifts=torch.tensor(unit_shifts, dtype=torch.get_default_dtype()),
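To make the per-layer edge masking concrete, here is a self-contained sketch of the mask construction that `from_config` now performs. The toy positions, edge list, and per-layer cutoffs below are invented for illustration and are not taken from the PR.

```python
import numpy as np
import torch

# Toy geometry: three atoms on a line, no periodic boundary conditions.
positions = np.array(
    [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.5, 0.0, 0.0]]
)

# Directed edges within the largest cutoff (in MACE these come from
# get_neighborhood); shifts are zero because there is no PBC here.
edge_index = np.array([[0, 1, 1, 2, 0, 2], [1, 0, 2, 1, 2, 0]])
shifts = np.zeros((edge_index.shape[1], 3))

# One cutoff per interaction layer (values are made up).
cutoffs = torch.tensor([2.0, 4.0])  # shape [n_layers]

# Same distance computation as in AtomicData.from_config.
edge_distance = np.linalg.norm(
    positions[edge_index[0]] - positions[edge_index[1]] - shifts, axis=1
)

# Boolean mask of shape [n_layers, n_edges]: layer i only keeps edges
# shorter than its own cutoff.
edge_mask = torch.tensor(edge_distance) < cutoffs[:, None]
print(edge_mask)
# tensor([[ True,  True, False, False, False, False],
#         [ True,  True,  True,  True,  True,  True]])
```

The PR stores the transpose (`edge_index_mask.T`, shape `[n_edges, n_layers]`) under `edge_mask`, presumably so that edge-level attributes batch along the first dimension.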