Thanks for the interesting work! I'd like to test the scalability of this framework, but I'm having trouble finding the entry point for running DBTransformer. Do you have any instructions or an example script showing how to load data and train a DBTransformer on it?
Also, your paper mentions that data loading is done via online SQL queries. Does that mean (1) the entire dataset should reside in SQL, and (2) you do minibatch training with something like neighbor/subgraph sampling?
Hi, thanks for taking an interest in this project. Here is a script, a modified version of main.py with the DB server URL and target database config exposed. Hope this helps with testing the DBTransformer on your own data.
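For reference, a minimal sketch of what such an entry point might look like, assuming an argparse-driven CLI; the names `load_database`, `DBTransformer`, `dataset.batches()`, and `model.loss(batch)` are illustrative placeholders, not this repo's actual API:

```python
# Hypothetical training entry point modeled on main.py, with the DB server URL
# and target-database config exposed as CLI flags. All project-specific names
# below (load_database, DBTransformer, ...) are placeholders for illustration.
import argparse

import torch


def main():
    parser = argparse.ArgumentParser(description="Train DBTransformer on a SQL database")
    parser.add_argument("--db-url", required=True,
                        help="DB server URL, e.g. mysql://user:pass@host/dbname")
    parser.add_argument("--target-table", required=True,
                        help="table containing the prediction target")
    parser.add_argument("--target-column", required=True,
                        help="column to predict")
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--lr", type=float, default=1e-3)
    args = parser.parse_args()

    # Assumed helpers: connect to the database, infer the schema, build the model.
    dataset = load_database(args.db_url, args.target_table, args.target_column)
    model = DBTransformer(dataset.schema)
    optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)

    # Standard minibatch training loop over whatever batching the loader provides.
    for epoch in range(args.epochs):
        for batch in dataset.batches():
            optimizer.zero_grad()
            loss = model.loss(batch)
            loss.backward()
            optimizer.step()


if __name__ == "__main__":
    main()
```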
For the second question, we currently have two data-loading approaches, chosen by whether the dataset fits into memory. If the dataset is large, we use online SQL queries (ideally against a local copy of the dataset) with a BFS to extract the subgraph around each target row. If the dataset is small enough, it is instead possible to work with the graph as a whole, which speeds up learning considerably.
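To make the BFS-over-SQL idea concrete, here is a minimal sketch assuming a toy sqlite3 schema; the `FOREIGN_KEYS` map, the `id` primary-key convention, and `bfs_subgraph` itself are illustrative assumptions, not this repo's actual loader:

```python
# Sketch of online data loading via SQL: a BFS over foreign-key links collects
# the rows forming a subgraph around a seed row, issuing one query per visited key.
# FOREIGN_KEYS and the "id" primary-key convention are illustrative assumptions.
import sqlite3
from collections import deque

# table -> [(fk_column, referenced_table, referenced_column)]
FOREIGN_KEYS = {
    "orders": [("customer_id", "customers", "id")],
    "order_items": [("order_id", "orders", "id"), ("product_id", "products", "id")],
}


def bfs_subgraph(conn, seed_table, seed_pk, max_depth=2):
    """Collect rows reachable from (seed_table, id=seed_pk) within max_depth hops."""
    visited = set()
    rows_out = []  # (table_name, row) pairs; per-row deduplication omitted for brevity
    queue = deque([(seed_table, "id", seed_pk, 0)])
    while queue:
        table, col, value, depth = queue.popleft()
        if (table, col, value) in visited or depth > max_depth:
            continue
        visited.add((table, col, value))
        # Identifiers come from the trusted FOREIGN_KEYS map; only values are parametrized.
        cur = conn.execute(f"SELECT * FROM {table} WHERE {col} = ?", (value,))
        columns = [d[0] for d in cur.description]
        for row in cur.fetchall():
            rows_out.append((table, row))
            # Follow this table's outgoing (child -> parent) foreign keys; a full
            # loader would also expand in the reverse (parent -> child) direction.
            for fk_col, ref_table, ref_col in FOREIGN_KEYS.get(table, []):
                fk_value = row[columns.index(fk_col)]
                if fk_value is not None:
                    queue.append((ref_table, ref_col, fk_value, depth + 1))
    return rows_out
```

With the toy schema above, `bfs_subgraph(sqlite3.connect("shop.db"), "order_items", 42)` would pull the order item, its order and product (depth 1), and the order's customer (depth 2).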