fedbox.models.mlr

import torch
from torch import nn


class MultinomialLogisticRegression(nn.Module):
    '''
    This class implements multinomial logistic regression for
    multiclass classification. Combined with a cross-entropy criterion,
    the model yields a convex loss objective, which becomes strongly
    convex once an L2 regularization term (e.g. weight decay) is added.

    Note
    ----
    A cross-entropy loss is used during optimization; in PyTorch it is
    equivalent to log-softmax followed by negative log-likelihood. Thus
    we do not need to apply softmax on the output layer, see [link](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss).
    '''

    def __init__(self, n_inputs: int, n_classes: int):
        super().__init__()

        self.flatten = nn.Flatten()
        self.linear = nn.Linear(n_inputs, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.flatten(x)
        return self.linear(x)

class MultinomialLogisticRegression(torch.nn.modules.module.Module):

This class implements multinomial logistic regression for multiclass classification. Combined with a cross-entropy criterion, the model yields a convex loss objective, which becomes strongly convex once an L2 regularization term (e.g. weight decay) is added.

Note

A cross-entropy loss is used during optimization; in PyTorch it is equivalent to log-softmax followed by negative log-likelihood. Thus we do not need to apply softmax on the output layer; see the CrossEntropyLoss documentation: https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss
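
For illustration, a minimal training-step sketch showing how the raw logits pair with nn.CrossEntropyLoss; the input sizes, synthetic batch, and optimizer settings here are assumptions for the example, not part of fedbox:

import torch
from torch import nn
from fedbox.models.mlr import MultinomialLogisticRegression

model = MultinomialLogisticRegression(n_inputs=20, n_classes=3)
criterion = nn.CrossEntropyLoss()  # applies log-softmax + NLL internally
# weight_decay adds the L2 term that makes the objective strongly convex
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)

x = torch.randn(32, 20)            # synthetic batch of 32 samples
y = torch.randint(0, 3, (32,))     # integer class labels in [0, 3)

optimizer.zero_grad()
logits = model(x)                  # raw scores, no softmax applied
loss = criterion(logits, y)        # softmax is handled by the criterion
loss.backward()
optimizer.step()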

MultinomialLogisticRegression(n_inputs: int, n_classes: int)

Builds the model: an nn.Flatten layer followed by a single nn.Linear layer mapping n_inputs input features to n_classes output logits.

flatten: the nn.Flatten layer applied to the input
linear: the nn.Linear(n_inputs, n_classes) output layer
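
As a sketch, instantiating the model for flattened 28x28 images with 10 classes (the MNIST-style sizes are illustrative assumptions):

import torch
from fedbox.models.mlr import MultinomialLogisticRegression

model = MultinomialLogisticRegression(n_inputs=28 * 28, n_classes=10)
images = torch.randn(64, 1, 28, 28)   # (batch, channels, height, width)
logits = model(images)                # nn.Flatten collapses to (64, 784)
print(logits.shape)                   # torch.Size([64, 10])
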
def forward(self, x: torch.Tensor) -> torch.Tensor:

Flattens the input to shape (batch_size, n_inputs) and returns raw class logits of shape (batch_size, n_classes); no softmax is applied (see the note above).

As with any nn.Module, call the module instance itself (e.g. model(x)) rather than forward directly, since the former runs any registered hooks while the latter silently ignores them.
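
A minimal inference sketch (the tensor sizes are illustrative):

import torch
from fedbox.models.mlr import MultinomialLogisticRegression

model = MultinomialLogisticRegression(n_inputs=20, n_classes=3)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(5, 20))    # calls forward via __call__
    probs = torch.softmax(logits, dim=1)  # softmax only for probabilities
    preds = logits.argmax(dim=1)          # predicted class indices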
