callbacks

Callback

class Callback

Base class for all callbacks.

Child callbacks should subclass this and override the methods they want to act on. All methods will be passed the loop, and the on_event method will additionally be passed an event.

active: bool = True
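
Example

A minimal sketch (not from the original docs) of a custom callback that overrides a couple of hooks. It assumes Callback can be imported from hearth.callbacks like the other callbacks on this page, and it only touches loop attributes (epoch, stage, loss) that are referenced elsewhere here:

>>> from hearth.callbacks import Callback
>>>
>>> class StageLossPrinter(Callback):
...     """Print the loss at the end of every stage."""
...
...     def on_stage_end(self, loop):
...         print(f'epoch {loop.epoch} [{loop.stage}] loss: {loop.loss:0.4f}')
...
...     def on_event(self, loop, event):
...         # events fired on the loop are also passed through here
...         print(f'event: {event}')

The callback would then be passed to the loop via callbacks=[StageLossPrinter()], as in the examples further down this page.
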
on_registration(loop)

This will be called when the loop sets up the callbacks, before any training starts.

on_stage_start(loop)

This will be called at the start of a stage.

An identifier such as train, val etc. will be accessible at loop.stage.

on_stage_end(loop)

This will be called at the end of a stage.

An identifier such as train, val etc. will be accessible at loop.stage.

on_epoch_start(loop)

This will be called when each epoch starts.

on_epoch_end(loop)

This will be called when each epoch ends.

on_batch_start(loop)

This will be called before the batch is passed to the model (for each stage).

on_batch_end(loop)

This will be called after each batch is processed (for each stage).

on_loss_start(loop)

This will be called for each batch before the loss is calculated (for each stage).

on_loss_end(loop)

This will be called for each batch after the loss is calculated (for each stage).

on_step_start(loop)

This will be called before the optimizer step (for the training stage only).

on_step_end(loop)

This will be called after the optimizer step (for the training stage only).

on_metric_start(loop)

This will be called just before metrics are calculated (for each batch).

on_metric_end(loop)

This will be called just after metrics are calculated (for each batch).

on_backward_start(loop)

This will be called just before backward is called on the loss (for each batch).

on_backward_end(loop)

This will be called just after backward is called (for each batch).

on_event(loop, event)

This will be called whenever an event is fired.


CallbackManager

class CallbackManager(*callbacks)

Manages multiple callbacks and calls them in order.
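
Example

A minimal sketch (not from the original docs) of bundling several callbacks. The manager exposes the same hooks as Callback and forwards each call to every callback in the order they were given; the callbacks used here are the ones documented further down this page:

>>> from hearth.callbacks import CallbackManager, PrintLogger, ClipGradNorm
>>>
>>> manager = CallbackManager(PrintLogger(), ClipGradNorm(max_norm=1.0))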

on_registration(loop)

This will be called when the loop sets up the callbacks, before any training starts.

on_stage_start(loop)

This will be called at the start of a stage.

An identifier such as train, val etc. will be accessible at loop.stage.

on_stage_end(loop)

This will be called at the end of a stage.

An identifier such as train, val etc. will be accessible at loop.stage.

on_epoch_start(loop)

This will be called when each epoch starts.

on_epoch_end(loop)

This will be called when each epoch ends.

on_batch_start(loop)

This will be called before the batch is passed to the model (for each stage).

on_batch_end(loop)

This will be called after each batch is processed (for each stage).

on_loss_start(loop)

This will be called for each batch before the loss is calculated (for each stage).

on_loss_end(loop)

This will be called for each batch after the loss is calculated (for each stage).

on_step_start(loop)

This will be called before the optimizer step (for the training stage only).

on_step_end(loop)

This will be called after the optimizer step (for the training stage only).

on_metric_start(loop)

This will be called just before metrics are calculated (for each batch).

on_metric_end(loop)

This will be called just after metrics are calculated (for each batch).

on_backward_start(loop)

This will be called just before backward is called on the loss (for each batch).

on_backward_end(loop)

This will be called just after backward is called (for each batch).

on_event(loop, event)

This will be called whenever an event is fired.


Checkpoint

class Checkpoint(model_dir, event_types=(<class 'hearth.events.Improvement'>, ), prepare_model=None, field=None, stage=None, save_history=True, save_optimizer=True)

This callback saves checkpoints on certain events.

Note

Only models derived from hearth.modules.BaseModule are currently supported.

Parameters
  • model_dir (str) – directory to save model checkpoint in. If the directory does not exist it will be created on registration.

  • event_types (Sequence[Type[MonitoringEvent]]) – Types of monitoring events to checkpoint on. Default is (hearth.events.Improvement, )

  • prepare_model (Optional[Callable[[BaseModule], BaseModule]]) – Optional callable which accepts and returns a BaseModule for preparing the model to be saved. This function will always receive a copy of the model on the loop just for safety. Defaults to None.

  • field (Optional[str]) – If provided only save on events where field matches this field.

  • stage (Optional[str]) – If provided only save on events where stage matches this stage.

  • save_history (bool) – if True save the loop history at this step to the model dir. Defaults to True

  • save_optimizer (bool) – if True save the optimizer state dict to the model dir. Defaults to True.

Active On:
  • registration

  • event

  • epoch_end

Events Listened For:
Events Emitted:
  • hearth.events.ModelSaved

Accesses Loop Attributes:
  • model

  • history (if save_history is True)

  • optimizer (if save_optimizer is True)

Accesses Event Attributes:
  • field

  • stage
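
Example

A minimal sketch (not from the original docs) pairing a Checkpoint with an ImprovementMonitor so that hearth.events.Improvement events trigger a save; the directory name is just a placeholder:

>>> from hearth.callbacks import Checkpoint, ImprovementMonitor
>>>
>>> callbacks = [
...     ImprovementMonitor(field='loss', stage='val'),  # emits Improvement / Stagnation events
...     Checkpoint(model_dir='checkpoints/my_model'),   # saves when a matching event fires
... ]

This list would then be passed to the Loop via callbacks=callbacks, as in the other examples on this page.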

model_dir: str
event_types: Sequence[Type[hearth.events.MonitoringEvent]] = (<class 'hearth.events.Improvement'>,)
prepare_model: Optional[Callable[[hearth.modules.base.BaseModule], hearth.modules.base.BaseModule]] = None
field: Optional[str] = None
stage: Optional[str] = None
save_history: bool = True
save_optimizer: bool = True
on_registration(loop)

This will be called when the loop sets up the callbacks, before any training starts.

save_checkpoint(loop)
on_epoch_end(loop)

This will be called when each epoch ends.

on_event(loop, event)

This will be called whenever an event is fired.


ClipGradNorm

class ClipGradNorm(max_norm, norm_type=2)

Clips the gradient norm of all trainable parameters of loop.model on backward end.

Parameters
  • max_norm (float) – max norm of the gradients.

  • norm_type (Union[float, int]) – type of the used p-norm. Can be 'inf' for infinity norm.
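
Example

A minimal sketch (not from the original docs): clip to a max norm of 1.0 with the default 2-norm, or use the 'inf' option mentioned above:

>>> from hearth.callbacks import ClipGradNorm
>>>
>>> callback = ClipGradNorm(max_norm=1.0)  # p=2 norm by default
>>> inf_callback = ClipGradNorm(max_norm=0.5, norm_type='inf')  # infinity norm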

max_norm: float
norm_type: Union[float, int] = 2
on_backward_end(loop)

This will be called just after backward is called (for each batch).


ClipGradValue

class ClipGradValue(clip_value)

Clips the gradients of all trainable parameters of loop.model on backward end.

Parameters

clip_value (float) – value to clip at.
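
Example

A minimal sketch (not from the original docs): gradients of all trainable parameters will be clipped to [-0.5, 0.5] on backward end:

>>> from hearth.callbacks import ClipGradValue
>>>
>>> callback = ClipGradValue(clip_value=0.5)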

clip_value: float
on_backward_end(loop)

This will be called just after backward is called (for each batch).


CosineAnnealingLRCallback

class CosineAnnealingLRCallback(T_max, eta_min=0.0, start_epoch=0, end_epoch=None)

Set the learning rate of each parameter group using a cosine annealing schedule.

Parameters
  • T_max (int) – Maximum number of iterations.

  • eta_min (float) – Minimum learning rate. Default: 0.

  • start_epoch (int) – this callback will start at this epoch. Defaults to 0

  • end_epoch (Optional[int]) – this callback will stop at this epoch. Defaults to None.

Example

>>> import torch
>>> from torch import nn
>>> from torch.utils.data import TensorDataset, DataLoader
>>>
>>> from hearth.loop import Loop
>>> from hearth.callbacks import CosineAnnealingLRCallback
>>>
>>> # make fakey train data and model
>>> x, y = torch.rand(130, 5), torch.rand(130, 1).round()
>>> train = DataLoader(TensorDataset(x[:100], y[:100]), batch_size=16)
>>> val = DataLoader(TensorDataset(x[-30:], y[-30:]), batch_size=16)
>>> model = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 1), nn.Sigmoid())
>>>
>>>
>>> # make callback ...
>>> callback = CosineAnnealingLRCallback(T_max=4, eta_min=.0001)
>>> # setup the loop
>>> loop = Loop(model,
...             optimizer=torch.optim.AdamW(model.parameters(), lr=0.1),
...             loss_fn = nn.BCELoss(),
...             callbacks = [callback]
...            )
>>>
>>> # run for 7 epochs
>>> loop(train, val, epochs=7)
>>> for row in loop.history:
...     print(row.epoch, row.lrs)
0 {'group0': 0.1}
1 {'group0': 0.0853}
2 {'group0': 0.0500}
3 {'group0': 0.0147}
4 {'group0': 0.0001}
5 {'group0': 0.0147}
6 {'group0': 0.0500}
T_max: int
eta_min: float = 0.0
end_epoch: Optional[int] = None
start_epoch: int = 0

EarlyStopping

class EarlyStopping(patience=5, event_types=(<class 'hearth.events.Stagnation'>, ), field=None, stage=None)

This callback stops the training loop early if it receives a monitoring event for enough steps.

Parameters
  • patience (int) – wait for this many steps (epochs) before triggering stopping.

  • event_types (Sequence[Type[MonitoringEvent]]) – Types of monitoring events to stop on. Default is (hearth.events.Stagnation, )

  • field (Optional[str]) – If provided only stop on events where field matches this field.

  • stage (Optional[str]) – If provided only stop on events where stage matches this stage.

Active On:
  • event

  • epoch_end

Events Listened For:
Events Emitted:
Accesses Loop Attributes:
  • should_stop

Modifies Loop Attributes:
  • should_stop

Accesses Event Attributes:
  • field

  • stage

  • steps
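
Example

A minimal sketch (not from the original docs) of early stopping driven by hearth.events.Stagnation events emitted by an ImprovementMonitor; the patience value here is just a placeholder:

>>> from hearth.callbacks import EarlyStopping, ImprovementMonitor
>>>
>>> callbacks = [
...     ImprovementMonitor(field='loss', stage='val'),  # emits Stagnation events when val loss stalls
...     EarlyStopping(patience=3, field='loss', stage='val'),
... ]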

patience: int = 5
event_types: Sequence[Type[hearth.events.MonitoringEvent]] = (<class 'hearth.events.Stagnation'>,)
field: Optional[str] = None
stage: Optional[str] = None
on_epoch_end(loop)

This will be called when each epoch ends.

on_event(loop, event)

This will be called whenever an event is fired.


ExponentialLRCallback

class ExponentialLRCallback(gamma, start_epoch=0, end_epoch=None)

Decays the learning rate of each parameter group by gamma every epoch.

Parameters
  • gamma (float) – Multiplicative factor of learning rate decay.

  • start_epoch (int) – this callback will start at this epoch. Defaults to 0

  • end_epoch (Optional[int]) – this callback will stop at this epoch. Defaults to None.

Example

>>> import torch
>>> from torch import nn
>>> from torch.utils.data import TensorDataset, DataLoader
>>>
>>> from hearth.loop import Loop
>>> from hearth.callbacks import ExponentialLRCallback
>>>
>>> # make fakey train data and model
>>> x, y = torch.rand(130, 5), torch.rand(130, 1).round()
>>> train = DataLoader(TensorDataset(x[:100], y[:100]), batch_size=16)
>>> val = DataLoader(TensorDataset(x[-30:], y[-30:]), batch_size=16)
>>> model = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 1), nn.Sigmoid())
>>>
>>>
>>> # make callback ...
>>> callback = ExponentialLRCallback(gamma=.8)
>>> # setup the loop
>>> loop = Loop(model,
...             optimizer=torch.optim.AdamW(model.parameters(), lr=0.1),
...             loss_fn = nn.BCELoss(),
...             callbacks = [callback]
...            )
>>>
>>> # run for 5 epochs
>>> loop(train, val, epochs=5)
>>> for row in loop.history:
...     print(row.epoch, row.lrs)
0 {'group0': 0.1}
1 {'group0': 0.08}
2 {'group0': 0.064}
3 {'group0': 0.0512}
4 {'group0': 0.04096}
gamma: float
end_epoch: Optional[int] = None
start_epoch: int = 0

FineTuneCallback

class FineTuneCallback(start_epoch, unbottle_every=1, decay=2.6, max_depth=-1)

This callback gradually unbottles blocks and adds them to the optimizer, decaying the learning rate by depth.

Note

Only models derived from hearth.modules.BaseModule and optimizers derived from hearth.modules.LazyOptimizer are currently supported.

Parameters
  • start_epoch (int) – start unbottling blocks at this epoch.

  • unbottle_every (int) – number of epochs to wait between unbottles.

  • decay (float) – lr for each block will be base_lr / (decay^depth), as illustrated in the example below. Defaults to 2.6 (as in ULMFiT).

  • max_depth (int) – maximum depth (in blocks) to unbottle. If -1 then keep unbottling until all blocks are trainable. Defaults to -1.

Active On:
  • registration

  • epoch_start

Events Emitted:
Accesses Loop Attributes:
  • model

  • optimizer
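
Example

A quick illustration of the decay formula above (plain Python, not library code): with a hypothetical base lr of 0.001 and the default decay of 2.6, blocks unbottled at depths 0 through 3 would get roughly:

>>> base_lr, decay = 1e-3, 2.6
>>> [round(base_lr / decay**depth, 6) for depth in range(4)]
[0.001, 0.000385, 0.000148, 5.7e-05]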

start_epoch: int
unbottle_every: int = 1
decay: float = 2.6
max_depth: int = -1
on_registration(loop)

This will be called when the loop sets up the callbacks, before any training starts.

on_epoch_start(loop)

This will be called when each epoch starts.


History

class History(*history)

This callback tracks history across epochs.

Note

This callback is auto-registered by hearth.loop.Loop by default, so you don't need to list it in callbacks. It should be accessible at loop.history.

Active On:
  • epoch_start

  • stage_end

  • epoch_end

Accesses Loop Attributes:
  • loss

  • metrics

  • epoch

  • optimizer

  • stage
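
Example

A minimal sketch (not from the original docs) of working with the auto-registered history, given a loop like the ones constructed elsewhere on this page; 'model_dir' is a placeholder directory:

>>> from hearth.callbacks import History
>>>
>>> history = loop.history               # auto-registered, no need to add it yourself
>>> history.save('model_dir')            # writes history.json into model_dir
>>> restored = History.load('model_dir')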

classmethod load(model_dir)

Load this history object from the model_dir.

Return type

History

property current_step

The current step if tracking is in the middle of an epoch.

property last_epoch: int

The last epoch in the complete history.

Return type

int

on_epoch_start(loop)

This will be called when each epoch starts.

on_stage_end(loop)

This will be called at the end of a stage.

An identifier such as train, val etc. will be accessible at loop.stage.

on_epoch_end(loop)

This will be called when each epoch ends.

save(model_dir)

Save this history to a file history.json in model_dir.


ImprovementMonitor

class ImprovementMonitor(field='loss', improvement_on='lt', stage='val', stagnant_after=1)

This callback monitors a metric or loss on the specified stage for improvement and stagnation.

When improvement or stagnation is detected on the metric it emits hearth.events.Improvement and hearth.events.Stagnation events which can be used in other callbacks.

Parameters
  • field (str) – the field to monitor on the loop. May be a dotted path string for nested objects. Defaults to ‘loss’.

  • improvement_on (Literal[‘gt’, ‘lt’]) – string operator specifier (lt is less than, gt is greater than). Defaults to ‘lt’ (for loss measurement).

  • stage (str) – named stage to measure improvement on. This should correspond to a stage on your loop. Defaults to ‘val’.

  • stagnant_after (int) – wait this number of steps before issuing stagnation events. Defaults to 1.

Active On:
  • stage_end

Events Emitted:
Accesses Loop Attributes:
  • metric or loss

  • stage

  • epoch
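
Example

A minimal sketch (not from the original docs): the first monitor uses the defaults (validation loss, lower is better), while the second watches loop.metric, where bigger is better:

>>> from hearth.callbacks import ImprovementMonitor
>>>
>>> loss_monitor = ImprovementMonitor()  # field='loss', improvement_on='lt', stage='val'
>>> metric_monitor = ImprovementMonitor(field='metric', improvement_on='gt', stage='val')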

field: str = 'loss'
improvement_on: Literal['gt', 'lt'] = 'lt'
stage: str = 'val'
stagnant_after: int = 1
on_stage_end(loop)

This will be called at the end of a stage.

An identifier such as train, val etc. will be accessible at loop.stage.


LambdaLRCallback

class LambdaLRCallback(lr_lambda, start_epoch=0, end_epoch=None)

Sets the learning rate of each parameter group to the initial lr times a given function.

Parameters
  • lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups.

  • start_epoch (int) – this callback will start at this epoch. Defaults to 0

  • end_epoch (Optional[int]) – this callback will stop at this epoch. Defaults to None.

Example

>>> import torch
>>> from torch import nn
>>> from torch.utils.data import TensorDataset, DataLoader
>>>
>>> from hearth.loop import Loop
>>> from hearth.callbacks import LambdaLRCallback
>>>
>>> # make fakey train data and model
>>> x, y = torch.rand(130, 5), torch.rand(130, 1).round()
>>> train = DataLoader(TensorDataset(x[:100], y[:100]), batch_size=16)
>>> val = DataLoader(TensorDataset(x[-30:], y[-30:]), batch_size=16)
>>> model = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 1), nn.Sigmoid())
>>>
>>>
>>> # make callback ... we'll stop early at epoch 3
>>> callback = LambdaLRCallback(lambda x: x**2, end_epoch=3)
>>> # setup the loop
>>> loop = Loop(model,
...             optimizer=torch.optim.AdamW(model.parameters(), lr=0.1),
...             loss_fn = nn.BCELoss(),
...             callbacks = [callback]
...            )
>>>
>>> # run for 5 epochs
>>> loop(train, val, epochs=5)
>>> for row in loop.history:
...     print(row.epoch, row.lrs)
0 {'group0': 0.1}
1 {'group0': 0.1}
2 {'group0': 0.4}
3 {'group0': 0.9}
4 {'group0': 0.9}
lr_lambda: Union[Callable, List[Callable]]
end_epoch: Optional[int] = None
start_epoch: int = 0

MultiStepLRCallback

class MultiStepLRCallback(milestones, gamma=0.1, start_epoch=0, end_epoch=None)

Decays the learning rate of each parameter group by gamma once the number of epoch reaches one of the milestones. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.

Parameters
  • milestones (list) – List of epoch indices. Must be increasing. Note that these values are epochs from the perspective of the scheduler, so they start when the scheduler starts, not necessarily at the corresponding epoch of the loop.

  • gamma (float) – Multiplicative factor of learning rate decay. Defaults to 0.1.

  • start_epoch (int) – this callback will start at this epoch. Defaults to 0

  • end_epoch (Optional[int]) – this callback will stop at this epoch. Defaults to None.

Example

>>> import torch
>>> from torch import nn
>>> from torch.utils.data import TensorDataset, DataLoader
>>>
>>> from hearth.loop import Loop
>>> from hearth.callbacks import MultiStepLRCallback
>>>
>>> # make fakey train data and model
>>> x, y = torch.rand(130, 5), torch.rand(130, 1).round()
>>> train = DataLoader(TensorDataset(x[:100], y[:100]), batch_size=16)
>>> val = DataLoader(TensorDataset(x[-30:], y[-30:]), batch_size=16)
>>> model = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 1), nn.Sigmoid())
>>>
>>>
>>> # make callback ...
>>> callback = MultiStepLRCallback(milestones=[1, 3], gamma=.9)
>>> # setup the loop
>>> loop = Loop(model,
...             optimizer=torch.optim.AdamW(model.parameters(), lr=0.1),
...             loss_fn = nn.BCELoss(),
...             callbacks = [callback]
...            )
>>>
>>> # run for 6 epochs
>>> loop(train, val, epochs=6)
>>> for row in loop.history:
...     print(row.epoch, row.lrs)
0 {'group0': 0.1}
1 {'group0': 0.09}
2 {'group0': 0.09}
3 {'group0': 0.081}
4 {'group0': 0.081}
5 {'group0': 0.081}
milestones: List[int]
gamma: float = 0.1
end_epoch: Optional[int] = None
start_epoch: int = 0

MultiplicativeLRCallback

class MultiplicativeLRCallback(lr_lambda, start_epoch=0, end_epoch=None)

Multiply the learning rate of each parameter group by the factor given in the specified function.

Parameters
  • lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups.

  • start_epoch (int) – this callback will start at this epoch. Defaults to 0

  • end_epoch (Optional[int]) – this callback will stop at this epoch. Defaults to None.

Example

>>> import torch
>>> from torch import nn
>>> from torch.utils.data import TensorDataset, DataLoader
>>>
>>> from hearth.loop import Loop
>>> from hearth.callbacks import MultiplicativeLRCallback
>>>
>>> # make fakey train data and model
>>> x, y = torch.rand(130, 5), torch.rand(130, 1).round()
>>> train = DataLoader(TensorDataset(x[:100], y[:100]), batch_size=16)
>>> val = DataLoader(TensorDataset(x[-30:], y[-30:]), batch_size=16)
>>> model = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 1), nn.Sigmoid())
>>>
>>>
>>> # make callback ... we'll start the scheduler rolling at epoch 2
>>> callback = MultiplicativeLRCallback(lambda x: .8, start_epoch=2)
>>> # setup the loop
>>> loop = Loop(model,
...             optimizer=torch.optim.AdamW(model.parameters(), lr=0.1),
...             loss_fn = nn.BCELoss(),
...             callbacks = [callback]
...            )
>>>
>>> # run for 6 epochs
>>> loop(train, val, epochs=6)
>>> for row in loop.history:
...     print(row.epoch, row.lrs)
0 {'group0': 0.1}
1 {'group0': 0.1}
2 {'group0': 0.1}
3 {'group0': 0.08}
4 {'group0': 0.064}
5 {'group0': 0.0512}
lr_lambda: Union[Callable, List[Callable]]
end_epoch: Optional[int] = None
start_epoch: int = 0

PrintLogger

class PrintLogger(batch_format='epoch: {loop.epoch} stage: [{loop.stage}] batch: {loop.batches_seen}/{loop.n_batches} loss: {loop.loss:0.4f}', metric_format=' {loop.metric:0.4f}', epoch_delim='-', epoch_delim_width=80)

A very simple logging callback that just prints to stdout.

Parameters
  • batch_format (str) – format string which will be printed (single line) for each batch and will be passed a single argument, loop. Defaults to hearth.callbacks.logging.DEFAULT_BATCH_FMT.

  • epoch_delim (str) – single char delimiter that will be used to separate epochs. Defaults to -.

  • epoch_delim_width (int) – width of epoch delimiter. Defaults to 80.

Example

>>> import torch
>>> from torch import nn
>>> _ = torch.manual_seed(0)
>>> from torch.utils.data import TensorDataset, DataLoader
>>> from hearth.loop import Loop
>>> from hearth.callbacks import PrintLogger
>>> from hearth.metrics import BinaryAccuracy
>>>
>>> train = TensorDataset(torch.normal(0, 2, size=(5000, 64)),
...                       torch.randint(2, size=(5000, 1))*1.0)
>>> val = TensorDataset(torch.normal(0, 2, size=(3000, 64)),
...                     torch.randint(2, size=(3000, 1))*1.0)
>>>
>>> train_batches = DataLoader(train, batch_size=32, shuffle=True, drop_last=False)
>>> val_batches = DataLoader(val, batch_size=32, shuffle=True, drop_last=False)
>>>
>>> model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
>>>
>>> loop = Loop(model=model,
...         optimizer=torch.optim.AdamW(model.parameters(), lr=0.001),
...        loss_fn = nn.BCELoss(),
...        metrics = BinaryAccuracy(),
...        callbacks= [PrintLogger()]
...       )
>>> loop(train_batches, val_batches, 2) 
epoch: 0 stage: [train] batch: 157/157 loss: 0.7036 metric: 0.5054
epoch: 0 stage: [val] batch: 94/94 loss: 0.7036 metric: 0.4933
--------------------------------------------------------------------------------
epoch: 1 stage: [train] batch: 157/157 loss: 0.6799 metric: 0.5660
epoch: 1 stage: [val] batch: 94/94 loss: 0.7097 metric: 0.4860
--------------------------------------------------------------------------------
batch_format: str = 'epoch: {loop.epoch} stage: [{loop.stage}] batch: {loop.batches_seen}/{loop.n_batches} loss: {loop.loss:0.4f}'
metric_format: str = ' {loop.metric:0.4f}'
epoch_delim: str = '-'
epoch_delim_width: int = 80
print_msg(msg)
on_registration(loop)

This will be called when the loop sets up the callbacks, before any training starts.

get_batch_msg(loop)
Return type

str

on_epoch_end(loop)

This will be called when each epoch ends.

on_batch_start(loop)

This will be called before the batch is passed to the model (for each stage).

on_batch_end(loop)

This will be called after each batch is processed (for each stage).

on_stage_end(loop)

This will be called at the end of a stage.

An identifier such as train, val etc. will be accessible at loop.stage.

on_event(loop, event)

This will be called whenever an event is fired.


ReduceLROnPlateauCallback

class ReduceLROnPlateauCallback(field='loss', mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0.0, eps=1e-08, start_epoch=0, end_epoch=None)

Reduce learning rate when the metric specified by field on the loop has stopped improving.

Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This scheduler reads a metric quantity, and if no improvement is seen for a ‘patience’ number of epochs, the learning rate is reduced.

Parameters
  • field (str) – the name of the field to access on the loop, may be dotted path. Defaults to ‘loss’.

  • mode (Literal[‘min’, ‘max’]) – if the metric is being minimized or maximized. Default: ‘min’.

  • factor (float) – Factor by which the learning rate will be reduced. Default: 0.1.

  • patience (int) – number of stagnant epochs to wait before reducing the lr. Defaults to 10.

  • threshold (float) – Threshold for measuring the new optimum to only focus on significant changes. Default: 1e-4.

  • threshold_mode (Literal[‘rel’, ‘abs’]) – One of rel, abs. In rel mode, dynamic_threshold = best * ( 1 + threshold ) in ‘max’ mode or best * ( 1 - threshold ) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. Default: ‘rel’.

  • cooldown (int) – Number of epochs to wait before resuming normal operation after lr has been reduced. Default: 0.

  • min_lr (float) – lower bound on the learning rate of all param groups. Default: 0.

  • eps (float) – Minimal decay applied to lr. If the difference between new and old lr is smaller than eps, the update is ignored. Default: 1e-8.

  • start_epoch (int) – this callback will start at this epoch. Defaults to 0

  • end_epoch (Optional[int]) – this callback will stop at this epoch. Defaults to None.

Example

>>> import torch
>>> from torch import nn
>>> from torch.utils.data import TensorDataset, DataLoader
>>>
>>> from hearth.loop import Loop
>>> from hearth.callbacks import ReduceLROnPlateauCallback
>>> _ = torch.manual_seed(0)
>>>
>>> # make fakey train data and model
>>> x, y = torch.rand(130, 5), torch.rand(130, 1).round()
>>> train = DataLoader(TensorDataset(x[:100], y[:100]), batch_size=16)
>>> val = DataLoader(TensorDataset(x[-30:], y[-30:]), batch_size=16)
>>> model = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 1), nn.Sigmoid())
>>>
>>>
>>> # make callback ...
>>> callback = ReduceLROnPlateauCallback(field='loss', patience=1)
>>> # setup the loop
>>> loop = Loop(model,
...             optimizer=torch.optim.AdamW(model.parameters(), lr=0.1),
...             loss_fn = nn.BCELoss(),
...             callbacks = [callback]
...            )
>>>
>>> # run for 7 epochs
>>> loop(train, val, epochs=7)
>>> for row in loop.history:
...     print(row.epoch, row.lrs)
0 {'group0': 0.1}
1 {'group0': 0.1}
2 {'group0': 0.1}
3 {'group0': 0.01}
4 {'group0': 0.01}
5 {'group0': 0.001}
6 {'group0': 0.001}
field: str = 'loss'
mode: Literal['min', 'max'] = 'min'
factor: float = 0.1
patience: int = 10
threshold: float = 0.0001
threshold_mode: Literal['rel', 'abs'] = 'rel'
cooldown: int = 0
min_lr: float = 0.0
eps: float = 1e-08
end_epoch: Optional[int] = None
on_epoch_end(loop)

This will be called when each epoch ends.

start_epoch: int = 0

StepLRCallback

class StepLRCallback(step_size, gamma=0.1, start_epoch=0, end_epoch=None)

Decays the learning rate of each parameter group by gamma every step_size epochs. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler.

Parameters
  • step_size (int) – Period of learning rate decay.

  • gamma (float) – Multiplicative factor of learning rate decay. Defaults to 0.1.

  • start_epoch (int) – this callback will start at this epoch. Defaults to 0

  • end_epoch (Optional[int]) – this callback will stop at this epoch. Defaults to None.

Example

>>> import torch
>>> from torch import nn
>>> from torch.utils.data import TensorDataset, DataLoader
>>>
>>> from hearth.loop import Loop
>>> from hearth.callbacks import StepLRCallback
>>>
>>> # make fakey train data and model
>>> x, y = torch.rand(130, 5), torch.rand(130, 1).round()
>>> train = DataLoader(TensorDataset(x[:100], y[:100]), batch_size=16)
>>> val = DataLoader(TensorDataset(x[-30:], y[-30:]), batch_size=16)
>>> model = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 1), nn.Sigmoid())
>>>
>>>
>>> # make callback ...
>>> callback = StepLRCallback(step_size=2, gamma=.9)
>>> # setup the loop
>>> loop = Loop(model,
...             optimizer=torch.optim.AdamW(model.parameters(), lr=0.1),
...             loss_fn = nn.BCELoss(),
...             callbacks = [callback]
...            )
>>>
>>> # run for 6 epochs
>>> loop(train, val, epochs=6)
>>> for row in loop.history:
...     print(row.epoch, row.lrs)
0 {'group0': 0.1}
1 {'group0': 0.1}
2 {'group0': 0.09}
3 {'group0': 0.09}
4 {'group0': 0.081}
5 {'group0': 0.081}
step_size: int
gamma: float = 0.1
end_epoch: Optional[int] = None
start_epoch: int = 0