This PR adds support for training EMLE models using mini-batches. As training datasets continue to grow in size, this feature becomes essential: unlike with the QM7 dataset, we can no longer fit everything into memory at once.
The implementation introduces three flags:

- `--use-minibatch`, which enables/disables mini-batch training;
- `--batch-size`, which specifies the size of each mini-batch;
- `--shuffle`, which shuffles the training data.

By default, training still uses the original full-batch optimization. A sketch of how these flags might map onto a training loop is shown below.
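Here is a minimal sketch of how the flags could drive a standard PyTorch training loop. The model, loss, and data names are hypothetical placeholders, not the actual EMLE training API; the point is that when `--use-minibatch` is off, the loader simply serves the whole dataset as a single batch, reproducing the original full-batch behaviour.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, features, targets, use_minibatch=False,
          batch_size=64, shuffle=False, epochs=100, lr=1e-3):
    """Hypothetical training loop illustrating the three flags."""
    dataset = TensorDataset(features, targets)
    if use_minibatch:
        # --use-minibatch: iterate over mini-batches of --batch-size,
        # optionally reshuffled each epoch via --shuffle.
        loader = DataLoader(dataset, batch_size=batch_size, shuffle=shuffle)
    else:
        # Default: a single "batch" containing the full dataset.
        loader = DataLoader(dataset, batch_size=len(dataset))

    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
```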
Thanks for this. Ignore the test failures. This is because sqm is currently completely broken with recent versions of ambertools. (There are glibc issues, so the package will likely need to be rebuilt.)
I've added a proof-of-concept implementation that performs the IVM and AEV calculations and makes the valence-width training step "lazy", i.e. batches of masked AEVs are written to disk and loaded on the fly as needed. This is necessary because large datasets otherwise cannot be loaded into memory: training does not get past the AEV computation, since the full aev_mols tensor cannot be stored in memory. I've been testing this on a dataset with ca. 0.5 M configurations, and so far it seems like a viable solution, although not the most performant. I'm keen to improve the implementation, so any suggestions are welcome!
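For reference, here is a minimal sketch of the lazy scheme described above, assuming a simple one-file-per-batch layout on disk; the class and helper names are illustrative, not the actual code in this PR.

```python
import os
import torch
from torch.utils.data import Dataset

def write_aev_batches(aev_batches, cache_dir):
    """Write each batch of masked AEVs to its own file on disk."""
    os.makedirs(cache_dir, exist_ok=True)
    for i, batch in enumerate(aev_batches):
        torch.save(batch, os.path.join(cache_dir, f"aev_{i:06d}.pt"))

class LazyAEVDataset(Dataset):
    """Serve pre-computed AEV batches from disk, one file at a time."""

    def __init__(self, cache_dir):
        self._files = sorted(
            os.path.join(cache_dir, f)
            for f in os.listdir(cache_dir)
            if f.endswith(".pt")
        )

    def __len__(self):
        return len(self._files)

    def __getitem__(self, idx):
        # Only one batch of masked AEVs is resident in memory at a time,
        # so the full aev_mols tensor never has to be materialised.
        return torch.load(self._files[idx])
```

Iterating over such a dataset (e.g. with `DataLoader(dataset, batch_size=None)`, which disables automatic batching since each item is already a batch) keeps peak memory proportional to the batch size rather than the dataset size.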