subroutine, public | torch_api::torch_dict_create (dict)
    Creates an empty Torch dictionary.

subroutine, public | torch_api::torch_dict_release (dict)
    Releases a Torch dictionary and all of its resources.

subroutine, public | torch_api::torch_model_load (model, filename)
    Loads a Torch model from the given "*.pth" file. (In Torch terminology, models are called modules.)

subroutine, public | torch_api::torch_model_eval (model, inputs, outputs)
    Evaluates the given Torch model. (In Torch terminology, this operation is called forward().)

subroutine, public | torch_api::torch_model_release (model)
    Releases a Torch model and all of its resources.

character(:) function, allocatable, public | torch_api::torch_model_read_metadata (filename, key)
    Reads a metadata entry from the given "*.pth" file. (In Torch terminology, such entries are called extra files.)

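As a hedged usage sketch (assuming `use torch_api` is in scope): the key "created_by" below is a hypothetical example, since extra-file keys are chosen when the model is exported, e.g. via torch.jit.save(..., _extra_files={...}).

```fortran
character(:), allocatable :: info

! Read one "extra files" entry embedded in the serialized model.
! The key "created_by" is hypothetical; use whatever keys were written
! when the model was exported.
info = torch_model_read_metadata("model.pth", "created_by")
print *, info
```
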
logical function, public | torch_api::torch_cuda_is_available ()
    Returns .true. if and only if the Torch CUDA backend is available.

subroutine, public | torch_api::torch_allow_tf32 (allow_tf32)
    Sets whether the use of TF32 is allowed. This is needed because PyTorch changed the default between versions 1.7, 1.11, and >=1.12. See https://pytorch.org/docs/stable/notes/cuda.html.

subroutine, public | torch_api::torch_model_freeze (model)
    Freezes the given Torch model, i.e. applies generic optimizations that speed up the model. See https://pytorch.org/docs/stable/generated/torch.jit.freeze.html.
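Taken together, the routines above support a typical load/evaluate/release inference workflow. The sketch below is illustrative only: the derived type names `torch_model` and `torch_dict`, and the routines for filling the input dictionary, are assumptions not confirmed by this listing; consult the module's type definitions for the exact interfaces.

```fortran
program torch_api_example
  use torch_api   ! module providing the routines listed above
  implicit none

  ! NOTE: these derived type names are assumptions for illustration;
  ! the actual types exposed by torch_api may differ.
  type(torch_model) :: model
  type(torch_dict)  :: inputs, outputs

  ! Optionally restore the pre-1.12 PyTorch default of allowing TF32 on CUDA.
  if (torch_cuda_is_available()) call torch_allow_tf32(.true.)

  ! Load a serialized "*.pth" module and apply generic freezing
  ! optimizations before inference.
  call torch_model_load(model, "model.pth")
  call torch_model_freeze(model)

  ! Wrap inputs and outputs in Torch dictionaries and run forward().
  call torch_dict_create(inputs)
  call torch_dict_create(outputs)
  ! ... fill `inputs` here (routines for inserting tensors are not
  ! part of this listing) ...
  call torch_model_eval(model, inputs, outputs)

  ! Release all resources.
  call torch_dict_release(inputs)
  call torch_dict_release(outputs)
  call torch_model_release(model)
end program torch_api_example
```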