gatelfpytorchjson.CustomModule module

class gatelfpytorchjson.CustomModule.CustomModule(config={})[source]

Bases: torch.nn.modules.module.Module
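
A minimal sketch of how a subclass might look, assuming only what the signatures above state: CustomModule is a torch.nn.Module subclass whose constructor accepts a config dict. The layer sizes, config keys and forward logic below are illustrative assumptions, not part of the documented API.

    import torch
    from gatelfpytorchjson.CustomModule import CustomModule

    class MyClassifier(CustomModule):
        def __init__(self, config={}):
            super().__init__(config=config)
            # Illustrative layers only; real feature/class counts would come from the config or dataset.
            self.hidden = torch.nn.Linear(config.get("n_features", 10), 32)
            self.out = torch.nn.Linear(32, config.get("n_classes", 2))

        def forward(self, x):
            return self.out(torch.relu(self.hidden(x)))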

get_lossfunction(config={})[source]
get_optimizer(config={})[source]
on_cuda()[source]

Returns True or False depending on whether the module is on CUDA. Unfortunately there is no API method in PyTorch for this, so it is determined from the first parameter of the model and cached. NOTE: this must be called outside of the __init__() method, because the CUDA status of the module gets set by the modelwrapper.
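
The check described above can be sketched as follows. This illustrates the technique (reading the device of the first parameter and caching the result); the cache attribute name is an assumption and this is not the module's actual source.

    def on_cuda(self):
        # Cache the answer the first time the method is called.
        if getattr(self, "_on_cuda", None) is None:
            # PyTorch has no module-level flag, so inspect the first parameter.
            self._on_cuda = next(self.parameters()).is_cuda
        return self._on_cuda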

set_seed(seed)[source]
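
set_seed presumably seeds the random number generators used for training, for reproducibility. A hedged sketch of what such a helper typically does follows; exactly which generators the real method seeds is an assumption.

    import random
    import torch

    def set_seed(seed):
        # Seed Python's and PyTorch's RNGs; also seed CUDA RNGs when a GPU is available.
        # (Assumption: the actual method may seed fewer or more sources.)
        random.seed(seed)
        torch.manual_seed(seed)
        if torch.cuda.is_available():
            torch.cuda.manual_seed_all(seed)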