Data Types

| Kind | Name |
|---|---|
| type | torch_dict_type |
| interface | torch_model_get_attr |
| type | torch_model_type |
| interface | torch_tensor_data_ptr |
| interface | torch_tensor_from_array |
| type | torch_tensor_type |
Functions/Subroutines

| Kind | Name | Description |
|---|---|---|
| subroutine, public | torch_tensor_backward (tensor, outer_grad) | Runs autograd on a Torch tensor. |
| subroutine, public | torch_tensor_grad (tensor, grad) | Returns the gradient of a Torch tensor computed by autograd. |
| subroutine, public | torch_tensor_release (tensor) | Releases a Torch tensor and all its resources. |
| subroutine, public | torch_dict_create (dict) | Creates an empty Torch dictionary. |
| subroutine, public | torch_dict_insert (dict, key, tensor) | Inserts a Torch tensor into a Torch dictionary. |
| subroutine, public | torch_dict_get (dict, key, tensor) | Retrieves a Torch tensor from a Torch dictionary. |
| subroutine, public | torch_dict_release (dict) | Releases a Torch dictionary and all its resources. |
| subroutine, public | torch_model_load (model, filename) | Loads a Torch model from a given "*.pth" file. (In Torch terminology, models are called modules.) |
| subroutine, public | torch_model_forward (model, inputs, outputs) | Evaluates the given Torch model. |
| subroutine, public | torch_model_release (model) | Releases a Torch model and all its resources. |
| character(:) function, allocatable, public | torch_model_read_metadata (filename, key) | Reads a metadata entry from a given "*.pth" file. (In Torch terminology, these are called extra files.) |
| logical function, public | torch_cuda_is_available () | Returns .true. if and only if the Torch CUDA backend is available. |
| subroutine, public | torch_allow_tf32 (allow_tf32) | Sets whether to allow the use of TF32. Needed because the defaults changed between PyTorch 1.7, 1.11, and >= 1.12; see https://pytorch.org/docs/stable/notes/cuda.html. |
| subroutine, public | torch_model_freeze (model) | Freezes the given Torch model: applies generic optimizations that speed up the model. See https://pytorch.org/docs/stable/generated/torch.jit.freeze.html. |
subroutine, public torch_api::torch_tensor_backward ( type(torch_tensor_type), intent(in) tensor, type(torch_tensor_type), intent(in) outer_grad )
Runs autograd on a Torch tensor.
Definition at line 936 of file torch_api.F.
subroutine, public torch_api::torch_tensor_grad ( type(torch_tensor_type), intent(in) tensor, type(torch_tensor_type), intent(inout) grad )
Returns the gradient of a Torch tensor which was computed by autograd.
Definition at line 969 of file torch_api.F.
subroutine, public torch_api::torch_tensor_release ( type(torch_tensor_type), intent(inout) tensor )

Releases a Torch tensor and all its resources.
Definition at line 998 of file torch_api.F.
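A minimal autograd sketch built from the three routines above. The tensors x (an input that tracks gradients), y (a result computed from x), and dy (the outer gradient) are assumed to have been created elsewhere, e.g. via the torch_tensor_from_array interface, whose exact signature is not documented on this page:

```fortran
subroutine autograd_example(x, y, dy)
  use torch_api, only: torch_tensor_type, torch_tensor_backward, &
                       torch_tensor_grad, torch_tensor_release
  ! x: input tensor that requires gradients (created elsewhere),
  ! y: result computed from x, dy: outer gradient seeding the backward pass.
  type(torch_tensor_type), intent(in) :: x, y, dy
  type(torch_tensor_type) :: dx

  ! Run autograd on y, seeding the backward pass with dy.
  call torch_tensor_backward(y, dy)

  ! Retrieve the gradient accumulated on x by the backward pass.
  call torch_tensor_grad(x, dx)

  ! ... use dx ...

  ! Release the gradient tensor once it is no longer needed.
  call torch_tensor_release(dx)
end subroutine autograd_example
```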
subroutine, public torch_api::torch_dict_create ( type(torch_dict_type), intent(inout) dict )
Creates an empty Torch dictionary.
Definition at line 1022 of file torch_api.F.
subroutine, public torch_api::torch_dict_insert ( type(torch_dict_type), intent(inout) dict, character(len=*), intent(in) key, type(torch_tensor_type), intent(in) tensor )
Inserts a Torch tensor into a Torch dictionary.
Definition at line 1046 of file torch_api.F.
subroutine, public torch_api::torch_dict_get ( type(torch_dict_type), intent(in) dict, character(len=*), intent(in) key, type(torch_tensor_type), intent(inout) tensor )
Retrieves a Torch tensor from a Torch dictionary.
Definition at line 1078 of file torch_api.F.
subroutine, public torch_api::torch_dict_release ( type(torch_dict_type), intent(inout) dict )

Releases a Torch dictionary and all its resources.
Definition at line 1112 of file torch_api.F.
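A sketch of the dictionary lifecycle using only the routines above; the tensor t is assumed to exist already (e.g. created via torch_tensor_from_array), and the key "temperature" is a placeholder:

```fortran
subroutine dict_example(t)
  use torch_api, only: torch_dict_type, torch_tensor_type, &
                       torch_dict_create, torch_dict_insert, &
                       torch_dict_get, torch_dict_release, &
                       torch_tensor_release
  type(torch_tensor_type), intent(in) :: t
  type(torch_dict_type)   :: dict
  type(torch_tensor_type) :: out

  ! Create an empty dictionary and insert the tensor under a key.
  call torch_dict_create(dict)
  call torch_dict_insert(dict, "temperature", t)

  ! Retrieve the tensor again by its key.
  call torch_dict_get(dict, "temperature", out)

  ! Release the dictionary and the retrieved tensor when done.
  call torch_dict_release(dict)
  call torch_tensor_release(out)
end subroutine dict_example
```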
subroutine, public torch_api::torch_model_load ( type(torch_model_type), intent(inout) model, character(len=*), intent(in) filename )

Loads a Torch model from a given "*.pth" file. (In Torch terminology, models are called modules.)
Definition at line 1136 of file torch_api.F.
subroutine, public torch_api::torch_model_forward ( type(torch_model_type), intent(inout) model, type(torch_dict_type), intent(in) inputs, type(torch_dict_type), intent(inout) outputs )
Evaluates the given Torch model.
Definition at line 1168 of file torch_api.F.
subroutine, public torch_api::torch_model_release ( type(torch_model_type), intent(inout) model )

Releases a Torch model and all its resources.
Definition at line 1204 of file torch_api.F.
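A sketch of the model lifecycle, again restricted to the routines documented here; the file name "model.pth" is a placeholder and the input/output dictionaries are assumed to be prepared as in the dictionary sketch above:

```fortran
subroutine model_example(inputs, outputs)
  use torch_api, only: torch_model_type, torch_dict_type, &
                       torch_model_load, torch_model_forward, &
                       torch_model_release
  ! inputs/outputs: dictionaries prepared by the caller
  ! (see torch_dict_create / torch_dict_insert above).
  type(torch_dict_type), intent(in)    :: inputs
  type(torch_dict_type), intent(inout) :: outputs
  type(torch_model_type) :: model

  ! Load a scripted model from a "*.pth" file (placeholder path).
  call torch_model_load(model, "model.pth")

  ! Evaluate the model; results are written into the outputs dictionary.
  call torch_model_forward(model, inputs, outputs)

  ! Release the model once it is no longer needed.
  call torch_model_release(model)
end subroutine model_example
```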
character(:) function, allocatable, public torch_api::torch_model_read_metadata ( character(len=*), intent(in) filename, character(len=*), intent(in) key )

Reads a metadata entry from a given "*.pth" file. (In Torch terminology, these are called extra files.)
Definition at line 1228 of file torch_api.F.
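A sketch of reading an extra file from a checkpoint; both the file name "model.pth" and the key "version" are placeholders:

```fortran
subroutine metadata_example()
  use torch_api, only: torch_model_read_metadata
  character(:), allocatable :: meta

  ! Read the extra file stored under a hypothetical key "version".
  meta = torch_model_read_metadata("model.pth", "version")
  print *, "model metadata: ", meta
end subroutine metadata_example
```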
logical function, public torch_api::torch_cuda_is_available ()

Returns .true. if and only if the Torch CUDA backend is available.
Definition at line 1285 of file torch_api.F.
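A small sketch of a runtime device check:

```fortran
subroutine device_example()
  use torch_api, only: torch_cuda_is_available

  ! Report whether the Torch CUDA backend can be used on this node.
  if (torch_cuda_is_available()) then
    print *, "Torch CUDA backend available"
  else
    print *, "running Torch on CPU"
  end if
end subroutine device_example
```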
subroutine, public torch_api::torch_allow_tf32 ( logical, intent(in) allow_tf32 )

Sets whether to allow the use of TF32. Needed because the defaults changed between PyTorch 1.7, 1.11, and >= 1.12; see https://pytorch.org/docs/stable/notes/cuda.html.
Definition at line 1309 of file torch_api.F.
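A sketch of pinning the TF32 behaviour explicitly at startup; disabling it here is only an illustrative choice for reproducibility, not a recommendation from this reference:

```fortran
subroutine tf32_example()
  use torch_api, only: torch_allow_tf32

  ! Disable TF32 so results do not depend on the PyTorch default;
  ! pass .true. instead to allow TF32 matmul/convolutions.
  call torch_allow_tf32(.false.)
end subroutine tf32_example
```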
subroutine, public torch_api::torch_model_freeze ( type(torch_model_type), intent(inout) model )

Freezes the given Torch model: applies generic optimizations that speed up the model. See https://pytorch.org/docs/stable/generated/torch.jit.freeze.html.
Definition at line 1332 of file torch_api.F.
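A sketch of freezing a loaded model; calling it once after torch_model_load and before inference is an assumption about typical usage, not a documented requirement:

```fortran
subroutine freeze_example(model)
  use torch_api, only: torch_model_type, torch_model_freeze
  ! model: a module previously loaded via torch_model_load.
  type(torch_model_type), intent(inout) :: model

  ! Apply torch.jit.freeze-style optimizations to the loaded module.
  call torch_model_freeze(model)
end subroutine freeze_example
```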