I ran the ELMo example using my own data, which is formatted the same as the example data ("word[tab]tag" on each line).
Training file: about 130 MB (a streaming-loader sketch is included after this list);
Training batch_size: tried values from 32 to 512;
Training epochs: 1;
ELMo model: my own trained ELMo;
My ELMo options file:
{"lstm": {"use_skip_connections": true, "projection_dim": 512, "cell_clip": 3, "proj_clip": 3, "dim": 4096, "n_layers": 2}, "char_cnn": {"activation": "relu", "filters": [[1, 32], [2, 32], [3, 64], [4, 128], [5, 256], [6, 512], [7, 1024]], "n_highway": 2, "embedding": {"dim": 16}, "n_characters": 262, "max_characters_per_token": 50}};
Other training options: left at their defaults.
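In case the example script reads the whole 130 MB training file into host memory at once, here is a minimal sketch of how the file could instead be streamed in batches. This is only an illustration, not the example script's real loader: `iter_batches` and the blank-line sentence separator are my own assumptions about the data.

```python
# Hypothetical lazy loader: yields batches of sentences from a
# "word\ttag" file without reading the whole file into memory.
# Assumes sentences are separated by blank lines.
def iter_batches(path, batch_size=64):
    batch, sentence = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                      # blank line ends a sentence
                if sentence:
                    batch.append(sentence)
                    sentence = []
                if len(batch) == batch_size:
                    yield batch
                    batch = []
            else:
                word, tag = line.split("\t")
                sentence.append((word, tag))
    if sentence:                              # flush the last sentence/batch
        batch.append(sentence)
    if batch:
        yield batch
```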
The situation: after training for some number of steps (the number varies with batch_size), the program is killed by the system, but I don't see a system or GPU memory leak.
The question: how did this happen? What did I do wrong? Is my batch_size too large, or is my training data too big? Someone please help!
With batch size 64, the process was killed by the OS after 123 steps.
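To check whether this is really the Linux OOM killer (dmesg normally shows an "Out of memory: Kill process ..." line when it fires), here is a minimal sketch that logs the process's resident memory after every step; if the RSS climbs steadily batch after batch, something is accumulating on the host side. `train_step` and `batches` are placeholders for the real training call and data iterator, and the sketch assumes psutil is installed.

```python
import os
import psutil  # assumed installed: pip install psutil

process = psutil.Process(os.getpid())

def log_memory(step):
    # Resident set size of this process, in MB
    rss_mb = process.memory_info().rss / 1024 ** 2
    print("step {}: host RSS = {:.0f} MB".format(step, rss_mb), flush=True)

# Inside the training loop (train_step and batches are placeholders):
# for step, batch in enumerate(batches):
#     loss = train_step(batch)
#     log_memory(step)
```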