(git:d18deda)
torch_api Module Reference

Data Types

type  torch_dict_type
 
interface  torch_model_get_attr
 
type  torch_model_type
 
interface  torch_tensor_data_ptr
 
interface  torch_tensor_from_array
 
type  torch_tensor_type
 

Functions/Subroutines

subroutine, public torch_tensor_backward (tensor, outer_grad)
 Runs autograd on a Torch tensor.
 
subroutine, public torch_tensor_grad (tensor, grad)
 Returns the gradient of a Torch tensor which was computed by autograd.
 
subroutine, public torch_tensor_release (tensor)
 Releases a Torch tensor and all its resources.
 
subroutine, public torch_dict_create (dict)
 Creates an empty Torch dictionary.
 
subroutine, public torch_dict_insert (dict, key, tensor)
 Inserts a Torch tensor into a Torch dictionary.
 
subroutine, public torch_dict_get (dict, key, tensor)
 Retrieves a Torch tensor from a Torch dictionary.
 
subroutine, public torch_dict_release (dict)
 Releases a Torch dictionary and all its resources.
 
subroutine, public torch_model_load (model, filename)
 Loads a Torch model from the given "*.pth" file. (In Torch lingo, models are called modules.)
 
subroutine, public torch_model_forward (model, inputs, outputs)
 Evaluates the given Torch model.
 
subroutine, public torch_model_release (model)
 Releases a Torch model and all its resources.
 
character(:) function, allocatable, public torch_model_read_metadata (filename, key)
 Reads a metadata entry from the given "*.pth" file. (In Torch lingo, these entries are called extra files.)
 
logical function, public torch_cuda_is_available ()
 Returns true iff the Torch CUDA backend is available.
 
subroutine, public torch_allow_tf32 (allow_tf32)
 Sets whether to allow the use of TF32. Needed because the default changed across PyTorch versions 1.7, 1.11, and >=1.12; see https://pytorch.org/docs/stable/notes/cuda.html.
 
subroutine, public torch_model_freeze (model)
 Freezes the given Torch model, which applies generic optimizations that speed up the model. See https://pytorch.org/docs/stable/generated/torch.jit.freeze.html.
 

Function/Subroutine Documentation

◆ torch_tensor_backward()

subroutine, public torch_api::torch_tensor_backward ( type(torch_tensor_type), intent(in)  tensor,
type(torch_tensor_type), intent(in)  outer_grad 
)

Runs autograd on a Torch tensor.

Author
Ole Schuett

Definition at line 936 of file torch_api.F.


◆ torch_tensor_grad()

subroutine, public torch_api::torch_tensor_grad ( type(torch_tensor_type), intent(in)  tensor,
type(torch_tensor_type), intent(inout)  grad 
)

Returns the gradient of a Torch tensor which was computed by autograd.

Author
Ole Schuett

Definition at line 969 of file torch_api.F.
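
The autograd pair is typically used together: run torch_tensor_backward on a result tensor, then fetch the accumulated gradient of an input with torch_tensor_grad. A minimal sketch; the exact torch_tensor_from_array interface and the computation producing y are assumptions here:

```fortran
SUBROUTINE autograd_sketch()
   USE torch_api, ONLY: torch_tensor_type, torch_tensor_from_array, &
                        torch_tensor_backward, torch_tensor_grad, &
                        torch_tensor_release
   TYPE(torch_tensor_type)               :: x, y, outer_grad, grad
   REAL(KIND=8), DIMENSION(2, 2), TARGET :: x_data, ones

   x_data = 1.0D0
   ones = 1.0D0  ! seed gradient, i.e. d(loss)/d(y) = 1

   ! Wrap Fortran arrays as Torch tensors (assumed interface).
   CALL torch_tensor_from_array(x, x_data)
   CALL torch_tensor_from_array(outer_grad, ones)

   ! ... obtain y from a computation involving x, e.g. torch_model_forward ...

   CALL torch_tensor_backward(y, outer_grad)  ! run autograd starting from y
   CALL torch_tensor_grad(x, grad)            ! gradient of x computed by autograd

   CALL torch_tensor_release(grad)
   CALL torch_tensor_release(outer_grad)
   CALL torch_tensor_release(x)
END SUBROUTINE autograd_sketch
```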


◆ torch_tensor_release()

subroutine, public torch_api::torch_tensor_release ( type(torch_tensor_type), intent(inout)  tensor)

Releases a Torch tensor and all its resources.

Author
Ole Schuett

Definition at line 998 of file torch_api.F.


◆ torch_dict_create()

subroutine, public torch_api::torch_dict_create ( type(torch_dict_type), intent(inout)  dict)

Creates an empty Torch dictionary.

Author
Ole Schuett

Definition at line 1022 of file torch_api.F.


◆ torch_dict_insert()

subroutine, public torch_api::torch_dict_insert ( type(torch_dict_type), intent(inout)  dict,
character(len=*), intent(in)  key,
type(torch_tensor_type), intent(in)  tensor 
)

Inserts a Torch tensor into a Torch dictionary.

Author
Ole Schuett

Definition at line 1046 of file torch_api.F.


◆ torch_dict_get()

subroutine, public torch_api::torch_dict_get ( type(torch_dict_type), intent(in)  dict,
character(len=*), intent(in)  key,
type(torch_tensor_type), intent(inout)  tensor 
)

Retrieves a Torch tensor from a Torch dictionary.

Author
Ole Schuett

Definition at line 1078 of file torch_api.F.


◆ torch_dict_release()

subroutine, public torch_api::torch_dict_release ( type(torch_dict_type), intent(inout)  dict)

Releases a Torch dictionary and all its resources.

Author
Ole Schuett

Definition at line 1112 of file torch_api.F.
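
A Torch dictionary follows a create/insert/get/release lifecycle. A sketch using the signatures documented above; the torch_tensor_from_array call is an assumption:

```fortran
TYPE(torch_dict_type)      :: dict
TYPE(torch_tensor_type)    :: tensor, fetched
REAL(KIND=8), DIMENSION(3) :: values = [1.0D0, 2.0D0, 3.0D0]

CALL torch_tensor_from_array(tensor, values)    ! assumed interface

CALL torch_dict_create(dict)                    ! start with an empty dictionary
CALL torch_dict_insert(dict, "coords", tensor)  ! keys are plain Fortran strings
CALL torch_dict_get(dict, "coords", fetched)    ! retrieve a tensor by key

CALL torch_tensor_release(fetched)
CALL torch_tensor_release(tensor)
CALL torch_dict_release(dict)                   ! frees the dictionary's resources
```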


◆ torch_model_load()

subroutine, public torch_api::torch_model_load ( type(torch_model_type), intent(inout)  model,
character(len=*), intent(in)  filename 
)

Loads a Torch model from the given "*.pth" file. (In Torch lingo, models are called modules.)

Author
Ole Schuett

Definition at line 1136 of file torch_api.F.


◆ torch_model_forward()

subroutine, public torch_api::torch_model_forward ( type(torch_model_type), intent(inout)  model,
type(torch_dict_type), intent(in)  inputs,
type(torch_dict_type), intent(inout)  outputs 
)

Evaluates the given Torch model.

Author
Ole Schuett

Definition at line 1168 of file torch_api.F.
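
Together with the dictionary routines, torch_model_load and torch_model_forward form the usual inference sequence. A hedged sketch: the filename and the "features"/"energy" keys are placeholders, and the torch_tensor_from_array interface is an assumption:

```fortran
TYPE(torch_model_type)      :: model
TYPE(torch_dict_type)       :: inputs, outputs
TYPE(torch_tensor_type)     :: in_tensor, out_tensor
REAL(KIND=8), DIMENSION(10) :: features = 0.0D0

CALL torch_model_load(model, "model.pth")

CALL torch_dict_create(inputs)
CALL torch_tensor_from_array(in_tensor, features)
CALL torch_dict_insert(inputs, "features", in_tensor)

CALL torch_dict_create(outputs)
CALL torch_model_forward(model, inputs, outputs)    ! evaluate the model
CALL torch_dict_get(outputs, "energy", out_tensor)  ! fetch a named output

! ... read the result, e.g. via torch_tensor_data_ptr ...

CALL torch_tensor_release(out_tensor)
CALL torch_tensor_release(in_tensor)
CALL torch_dict_release(inputs)
CALL torch_dict_release(outputs)
CALL torch_model_release(model)
```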


◆ torch_model_release()

subroutine, public torch_api::torch_model_release ( type(torch_model_type), intent(inout)  model)

Releases a Torch model and all its resources.

Author
Ole Schuett

Definition at line 1204 of file torch_api.F.


◆ torch_model_read_metadata()

character(:) function, allocatable, public torch_api::torch_model_read_metadata ( character(len=*), intent(in)  filename,
character(len=*), intent(in)  key 
)

Reads a metadata entry from the given "*.pth" file. (In Torch lingo, these entries are called extra files.)

Author
Ole Schuett

Definition at line 1228 of file torch_api.F.
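
Since the result is a deferred-length allocatable string, it can be assigned directly. A sketch; the "description" key is a placeholder:

```fortran
CHARACTER(:), ALLOCATABLE :: meta

! Reads the extra file stored under the given key in the checkpoint.
meta = torch_model_read_metadata("model.pth", "description")
IF (LEN(meta) > 0) PRINT *, "model metadata: ", meta
```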


◆ torch_cuda_is_available()

logical function, public torch_api::torch_cuda_is_available

Returns true iff the Torch CUDA backend is available.

Author
Ole Schuett

Definition at line 1285 of file torch_api.F.


◆ torch_allow_tf32()

subroutine, public torch_api::torch_allow_tf32 ( logical, intent(in)  allow_tf32)

Sets whether to allow the use of TF32. Needed because the default changed across PyTorch versions 1.7, 1.11, and >=1.12; see https://pytorch.org/docs/stable/notes/cuda.html.

Author
Gabriele Tocci

Definition at line 1309 of file torch_api.F.
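
Because the TF32 default differs between PyTorch versions, callers typically set it explicitly once at startup, for example only when the CUDA backend is present. A sketch:

```fortran
! Enable TF32 matmuls/convolutions on GPUs that support them,
! trading a little precision for speed.
IF (torch_cuda_is_available()) THEN
   CALL torch_allow_tf32(.TRUE.)
END IF
```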


◆ torch_model_freeze()

subroutine, public torch_api::torch_model_freeze ( type(torch_model_type), intent(inout)  model)

Freezes the given Torch model, which applies generic optimizations that speed up the model. See https://pytorch.org/docs/stable/generated/torch.jit.freeze.html.

Author
Gabriele Tocci

Definition at line 1332 of file torch_api.F.
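
Freezing is a one-time optimization, so it is typically applied right after loading and before repeated forward calls. A sketch:

```fortran
TYPE(torch_model_type) :: model

CALL torch_model_load(model, "model.pth")
CALL torch_model_freeze(model)  ! optimize once, before many forward calls
```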
