(git:34ef472)
Functions/Subroutines

subroutine, public torch_dict_create (dict)
    Creates an empty Torch dictionary.
subroutine, public torch_dict_release (dict)
    Releases a Torch dictionary and all its resources.
subroutine, public torch_model_load (model, filename)
    Loads a Torch model from a given "*.pth" file. (In Torch lingo, models are called modules.)
subroutine, public torch_model_eval (model, inputs, outputs)
    Evaluates the given Torch model. (In Torch lingo, this operation is called forward().)
subroutine, public torch_model_release (model)
    Releases a Torch model and all its resources.
character(:) function, allocatable, public torch_model_read_metadata (filename, key)
    Reads a metadata entry from a given "*.pth" file. (In Torch lingo, these are called extra files.)
logical function, public torch_cuda_is_available ()
    Returns true iff the Torch CUDA backend is available.
subroutine, public torch_allow_tf32 (allow_tf32)
    Sets whether to allow the use of TF32. Needed because the defaults changed between PyTorch 1.7, 1.11, and >=1.12; see https://pytorch.org/docs/stable/notes/cuda.html.
subroutine, public torch_model_freeze (model)
    Freezes the given Torch model: applies generic optimizations that speed up the model. See https://pytorch.org/docs/stable/generated/torch.jit.freeze.html.
subroutine, public torch_api::torch_dict_create ( type(torch_dict_type), intent(inout) dict )
Creates an empty Torch dictionary.
Definition at line 895 of file torch_api.F.
subroutine, public torch_api::torch_dict_release ( type(torch_dict_type), intent(inout) dict )
Releases a Torch dictionary and all its resources.
Definition at line 919 of file torch_api.F.
subroutine, public torch_api::torch_model_load ( type(torch_model_type), intent(inout) model, character(len=*), intent(in) filename )
Loads a Torch model from a given "*.pth" file. (In Torch lingo, models are called modules.)
Definition at line 943 of file torch_api.F.
subroutine, public torch_api::torch_model_eval ( type(torch_model_type), intent(inout) model, type(torch_dict_type), intent(in) inputs, type(torch_dict_type), intent(inout) outputs )
Evaluates the given Torch model. (In Torch lingo, this operation is called forward().)
Definition at line 970 of file torch_api.F.
subroutine, public torch_api::torch_model_release ( type(torch_model_type), intent(inout) model )
Releases a Torch model and all its resources.
Definition at line 1003 of file torch_api.F.
character(:) function, allocatable, public torch_api::torch_model_read_metadata ( character(len=*), intent(in) filename, character(len=*), intent(in) key )
Reads a metadata entry from a given "*.pth" file. (In Torch lingo, these are called extra files.)
Definition at line 1027 of file torch_api.F.
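Since the result is a deferred-length allocatable character, it can be assigned directly. A short sketch; the filename and the metadata key "description" are hypothetical examples, not keys defined by this module:

```fortran
! Sketch: retrieve one extra-file entry stored alongside the model.
character(:), allocatable :: meta

meta = torch_model_read_metadata("my_model.pth", "description")
print *, "model metadata: ", meta
```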
logical function, public torch_api::torch_cuda_is_available ( )
Returns true iff the Torch CUDA backend is available.
Definition at line 1079 of file torch_api.F.
subroutine, public torch_api::torch_allow_tf32 ( logical, intent(in) allow_tf32 )
Sets whether to allow the use of TF32. Needed because the defaults changed between PyTorch 1.7, 1.11, and >=1.12; see https://pytorch.org/docs/stable/notes/cuda.html.
Definition at line 1103 of file torch_api.F.
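A natural pairing of the two CUDA-related routines is to enable TF32 only when the CUDA backend is actually present. A minimal sketch, assuming TF32's reduced precision is acceptable for the workload:

```fortran
! Sketch: opt back into TF32 math (the pre-1.12 PyTorch default) when CUDA is available.
if (torch_cuda_is_available()) then
   call torch_allow_tf32(.true.)
end if
```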
subroutine, public torch_api::torch_model_freeze ( type(torch_model_type), intent(inout) model )
Freezes the given Torch model: applies generic optimizations that speed up the model. See https://pytorch.org/docs/stable/generated/torch.jit.freeze.html.
Definition at line 1126 of file torch_api.F.
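Because freezing is a one-time optimization pass, it is typically applied immediately after loading and before the first evaluation. A sketch with a hypothetical filename:

```fortran
! Sketch: load once, freeze once, then evaluate as often as needed.
call torch_model_load(model, "my_model.pth")
call torch_model_freeze(model)   ! torch.jit.freeze-style optimization pass
```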