(git:34ef472)
torch_api Module Reference

Functions/Subroutines

subroutine, public torch_dict_create (dict)
 Creates an empty Torch dictionary.
 
subroutine, public torch_dict_release (dict)
 Releases a Torch dictionary and all its resources.
 
subroutine, public torch_model_load (model, filename)
 Loads a Torch model from a given "*.pth" file. (In Torch lingo, models are called modules.)
 
subroutine, public torch_model_eval (model, inputs, outputs)
 Evaluates the given Torch model. (In Torch lingo, this operation is called forward().)
 
subroutine, public torch_model_release (model)
 Releases a Torch model and all its resources.
 
character(:) function, allocatable, public torch_model_read_metadata (filename, key)
 Reads a metadata entry from a given "*.pth" file. (In Torch lingo, these are called extra files.)
 
logical function, public torch_cuda_is_available ()
 Returns true iff the Torch CUDA backend is available.
 
subroutine, public torch_allow_tf32 (allow_tf32)
 Sets whether to allow the use of TF32. Needed because the defaults changed between PyTorch 1.7, 1.11, and >=1.12; see https://pytorch.org/docs/stable/notes/cuda.html.
 
subroutine, public torch_model_freeze (model)
 Freezes the given Torch model, applying generic optimizations that speed up evaluation. See https://pytorch.org/docs/stable/generated/torch.jit.freeze.html.
 

Function/Subroutine Documentation

◆ torch_dict_create()

subroutine, public torch_api::torch_dict_create ( type(torch_dict_type), intent(inout)  dict)

Creates an empty Torch dictionary.

Author
Ole Schuett

Definition at line 895 of file torch_api.F.

◆ torch_dict_release()

subroutine, public torch_api::torch_dict_release ( type(torch_dict_type), intent(inout)  dict)

Releases a Torch dictionary and all its resources.

Author
Ole Schuett

Definition at line 919 of file torch_api.F.

◆ torch_model_load()

subroutine, public torch_api::torch_model_load ( type(torch_model_type), intent(inout)  model,
character(len=*), intent(in)  filename 
)

Loads a Torch model from a given "*.pth" file. (In Torch lingo, models are called modules.)

Author
Ole Schuett

Definition at line 943 of file torch_api.F.

◆ torch_model_eval()

subroutine, public torch_api::torch_model_eval ( type(torch_model_type), intent(inout)  model,
type(torch_dict_type), intent(in)  inputs,
type(torch_dict_type), intent(inout)  outputs 
)

Evaluates the given Torch model. (In Torch lingo, this operation is called forward().)

Author
Ole Schuett

Definition at line 970 of file torch_api.F.
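
Taken together, the routines above form the typical inference workflow: load the model, create input and output dictionaries, evaluate, and release everything. A minimal sketch (the filename is an illustrative assumption, and populating the dictionaries with tensors uses further torch_api routines not documented in this section):

```fortran
! Sketch of a typical inference workflow using only the routines
! documented here. "model.pth" is an illustrative filename; filling
! the input dictionary with tensors is done with routines not shown.
SUBROUTINE run_inference()
   USE torch_api, ONLY: torch_dict_type, torch_model_type, &
                        torch_dict_create, torch_dict_release, &
                        torch_model_load, torch_model_eval, &
                        torch_model_release
   TYPE(torch_model_type) :: model
   TYPE(torch_dict_type)  :: inputs, outputs

   CALL torch_model_load(model, "model.pth")

   CALL torch_dict_create(inputs)
   CALL torch_dict_create(outputs)
   ! ... insert input tensors into `inputs` here ...

   CALL torch_model_eval(model, inputs, outputs)
   ! ... read result tensors from `outputs` here ...

   CALL torch_dict_release(inputs)
   CALL torch_dict_release(outputs)
   CALL torch_model_release(model)
END SUBROUTINE run_inference
```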

◆ torch_model_release()

subroutine, public torch_api::torch_model_release ( type(torch_model_type), intent(inout)  model)

Releases a Torch model and all its resources.

Author
Ole Schuett

Definition at line 1003 of file torch_api.F.

◆ torch_model_read_metadata()

character(:) function, allocatable, public torch_api::torch_model_read_metadata ( character(len=*), intent(in)  filename,
character(len=*), intent(in)  key 
)

Reads a metadata entry from a given "*.pth" file. (In Torch lingo, these are called extra files.)

Author
Ole Schuett

Definition at line 1027 of file torch_api.F.
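
This allows a model's metadata to be inspected without evaluating it. A sketch (the filename and key are illustrative assumptions; which keys exist depends on the extra files written when the model was saved):

```fortran
! Sketch: read a metadata entry stored as an "extra file" in the
! checkpoint. Filename and key are illustrative, not fixed names.
CHARACTER(:), ALLOCATABLE :: info
info = torch_model_read_metadata("model.pth", "description")
```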

◆ torch_cuda_is_available()

logical function, public torch_api::torch_cuda_is_available

Returns true iff the Torch CUDA backend is available.

Author
Ole Schuett

Definition at line 1079 of file torch_api.F.

◆ torch_allow_tf32()

subroutine, public torch_api::torch_allow_tf32 ( logical, intent(in)  allow_tf32)

Sets whether to allow the use of TF32. Needed because the defaults changed between PyTorch 1.7, 1.11, and >=1.12; see https://pytorch.org/docs/stable/notes/cuda.html.

Author
Gabriele Tocci

Definition at line 1103 of file torch_api.F.
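
Because the PyTorch default changed across versions, callers typically pin the behavior explicitly. A sketch that enables TF32 only when the CUDA backend is present (whether TF32 is acceptable depends on the accuracy requirements of the model):

```fortran
! Sketch: opt in to TF32 math explicitly, so the behavior does not
! depend on the linked PyTorch version's default.
IF (torch_cuda_is_available()) THEN
   CALL torch_allow_tf32(.TRUE.)
END IF
```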

◆ torch_model_freeze()

subroutine, public torch_api::torch_model_freeze ( type(torch_model_type), intent(inout)  model)

Freezes the given Torch model, applying generic optimizations that speed up evaluation. See https://pytorch.org/docs/stable/generated/torch.jit.freeze.html.

Author
Gabriele Tocci

Definition at line 1126 of file torch_api.F.
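
Freezing is typically done once, right after loading and before the first evaluation, so that all subsequent calls benefit from the optimizations. A sketch (the filename is an illustrative assumption):

```fortran
! Sketch: freeze immediately after loading, before any evaluation.
CALL torch_model_load(model, "model.pth")
CALL torch_model_freeze(model)
```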
