metrics¶
BinaryAccuracy¶
- class BinaryAccuracy(mask_target=-1, from_logits=False)¶
binary accuracy over possibly unnormalized values (see from_logits) with target masking support.
- Parameters
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
  - from_logits (bool) – if True, inputs are expected to be unnormalized and a sigmoid will be applied before comparison with targets. Defaults to False.
Example
>>> import torch
>>> _ = torch.manual_seed(0)
>>> from hearth.metrics import BinaryAccuracy
>>>
>>> metric = BinaryAccuracy()
>>> metric
BinaryAccuracy(mask_target=-1, from_logits=False)
by default from_logits is False, so we expect inputs to be normalized:
>>> targets = torch.randint(1, size=(100,))  # (batch,)
>>> predictions = torch.normal(0, 1, size=(100, 1))  # (batch, 1)
>>>
>>> metric(predictions.sigmoid(), targets)
tensor(0.4400)
alternatively we could specify from_logits=True, in which case inputs will be passed through a sigmoid for us before being compared with targets:
>>> metric = BinaryAccuracy(from_logits=True)
>>> metric(predictions, targets)  # no sigmoid!
tensor(0.4400)
this metric also supports masked targets for variable length sequence data, which should be batch first: (batch, time) or (batch, time, 1).
>>> metric = BinaryAccuracy()
>>> targets = torch.tensor([[1, 1, 1, -1], [1, 1, -1, -1], [1, 1, 1, 0]])  # (batch, time)
>>> predictions = torch.ones(3, 4)
>>> metric(predictions, targets)
tensor(0.8889)
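since predictions are all ones and 8 of the 9 unmasked target entries are 1, the masked accuracy works out to 8/9:
>>> round(8 / 9, 4)
0.8889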
BinaryF1¶
- class BinaryF1(eps=1e-08, dim=0, mask_target=-1, from_logits=False)¶
binary f1 score over possibly unnormalized values (see from_logits) with target masking support.
- Parameters
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
  - from_logits (bool) – if True, inputs are expected to be unnormalized and a sigmoid will be applied before comparison with targets. Defaults to False.
Example
>>> import torch
>>> from hearth.metrics import BinaryF1
>>>
>>> inputs = torch.tensor([0.7116, 0.6470, 0.5039, 0.9953, 0.8948, 0.4229, 0.8654, 0.8108])
>>> targets = torch.tensor([0., 1., 1., 1., 0., 1., 0., 0.])
>>>
>>> f1 = BinaryF1()
>>> f1(inputs, targets)
tensor(0.5455)
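f1 is the harmonic mean of precision and recall. Assuming the conventional 0.5 decision threshold (which is consistent with the output above), these inputs give precision 3/7 and recall 3/4:
>>> p, r = 3 / 7, 3 / 4  # precision & recall at an assumed 0.5 threshold
>>> round(2 * p * r / (p + r), 4)
0.5455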
works fine with an extra dim:
>>> f1(inputs.unsqueeze(-1), targets.unsqueeze(-1))
tensor(0.5455)
use the from_logits option if your inputs will not be sigmoid squashed:
>>> f1 = BinaryF1(from_logits=True)
>>> unsigmoided = torch.log(inputs / (1 - inputs))
>>> f1(unsigmoided, targets)
tensor(0.5455)
for cases like variable length time sequences, use the mask_target option:
>>> f1 = BinaryF1(mask_target=-1)
>>> inputs = inputs.reshape(2, 4)  # (batch, sequence length)
>>> # note we mask some timesteps with -1
>>> targets = torch.tensor([[0., 1., 1., -1],
...                         [0., 1, -1, -1]])
>>> f1(inputs, targets)
tensor(0.5714)
BinaryFBeta¶
- class BinaryFBeta(beta=1, eps=1e-08, dim=0, mask_target=-1, from_logits=False)¶
binary fbeta score over possibly unnormalized values (see from_logits) with target masking support.
- Parameters
  - beta (float) – beta value for weighting precision and recall: beta < 1 weights precision higher, while beta > 1 gives more weight to recall. Defaults to 1 (making the default the same as f1 score). See the formula sketch below.
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
  - from_logits (bool) – if True, inputs are expected to be unnormalized and a sigmoid will be applied before comparison with targets. Defaults to False.
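for reference, fbeta follows the standard formula (a sketch of the textbook definition, not taken from the library source):
>>> def fbeta_score(precision, recall, beta):
...     return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
>>> round(fbeta_score(0.9, 0.3, beta=0.5), 4)  # beta < 1 rewards high precision
0.6429
>>> round(fbeta_score(0.9, 0.3, beta=2.0), 4)  # beta > 1 rewards high recall
0.3462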
Example
>>> import torch
>>> from hearth.metrics import BinaryFBeta
>>>
>>> inputs = torch.tensor([0.7116, 0.6470, 0.5039, 0.9953, 0.8948, 0.4229, 0.8654, 0.8108])
>>> targets = torch.tensor([0., 1., 1., 1., 0., 1., 0., 0.])
>>>
>>> fbeta = BinaryFBeta(beta=.5)
>>> fbeta(inputs, targets)
tensor(0.4687)
works fine with an extra dim:
>>> fbeta(inputs.unsqueeze(-1), targets.unsqueeze(-1))
tensor(0.4687)
use the from_logits option if your inputs will not be sigmoid squashed:
>>> fbeta = BinaryFBeta(beta=.5, from_logits=True)
>>> unsigmoided = torch.log(inputs / (1 - inputs))
>>> fbeta(unsigmoided, targets)
tensor(0.4687)
for cases like variable length time sequences, use the mask_target option:
>>> fbeta = BinaryFBeta(beta=.5, mask_target=-1)
>>> inputs = inputs.reshape(2, 4)  # (batch, sequence length)
>>> # note we mask some timesteps with -1
>>> targets = torch.tensor([[0., 1., 1., -1],
...                         [0., 1, -1, -1]])
>>> fbeta(inputs, targets)
tensor(0.5263)
BinaryPrecision¶
- class BinaryPrecision(eps=1e-08, dim=0, mask_target=-1, from_logits=False)¶
binary precision over possibly unnormalized values (see from_logits) with target masking support.
- Parameters
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
  - from_logits (bool) – if True, inputs are expected to be unnormalized and a sigmoid will be applied before comparison with targets. Defaults to False.
Example
>>> import torch
>>> from hearth.metrics import BinaryPrecision
>>>
>>> inputs = torch.tensor([0.7116, 0.6470, 0.5039, 0.9953, 0.8948, 0.4229, 0.8654, 0.8108])
>>> targets = torch.tensor([0., 1., 1., 1., 0., 1., 0., 0.])
>>>
>>> precision = BinaryPrecision()
>>> precision(inputs, targets)
tensor(0.4286)
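assuming the conventional 0.5 decision threshold (consistent with the output above), this is true positives over all predicted positives:
>>> preds = (inputs > 0.5).float()  # assumed 0.5 threshold
>>> round(((preds * targets).sum() / preds.sum()).item(), 4)  # TP / (TP + FP)
0.4286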
works fine with an extra dim:
>>> precision(inputs.unsqueeze(-1), targets.unsqueeze(-1))
tensor(0.4286)
use the from_logits option if your inputs will not be sigmoid squashed:
>>> precision = BinaryPrecision(from_logits=True)
>>> unsigmoided = torch.log(inputs / (1 - inputs))
>>> precision(unsigmoided, targets)
tensor(0.4286)
for cases like variable length time sequences, use the mask_target option:
>>> precision = BinaryPrecision(mask_target=-1)
>>> inputs = inputs.reshape(2, 4)  # (batch, sequence length)
>>> # note we mask some timesteps with -1
>>> targets = torch.tensor([[0., 1., 1., -1],
...                         [0., 1, -1, -1]])
>>> precision(inputs, targets)
tensor(0.5000)
BinaryRecall¶
- class BinaryRecall(eps=1e-08, dim=0, mask_target=-1, from_logits=False)¶
binary recall over possibly unnormalized values (see from_logits) with target masking support.
- Parameters
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
  - from_logits (bool) – if True, inputs are expected to be unnormalized and a sigmoid will be applied before comparison with targets. Defaults to False.
Example
>>> import torch
>>> from hearth.metrics import BinaryRecall
>>>
>>> inputs = torch.tensor([0.7116, 0.6470, 0.5039, 0.9953, 0.8948, 0.4229, 0.8654, 0.8108])
>>> targets = torch.tensor([0., 1., 1., 1., 0., 1., 0., 0.])
>>>
>>> recall = BinaryRecall()
>>> recall(inputs, targets)
tensor(0.7500)
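again assuming a 0.5 decision threshold (consistent with the output above), recall is true positives over all actual positives:
>>> preds = (inputs > 0.5).float()  # assumed 0.5 threshold
>>> round(((preds * targets).sum() / targets.sum()).item(), 4)  # TP / (TP + FN)
0.75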
works fine with an extra dim:
>>> recall(inputs.unsqueeze(-1), targets.unsqueeze(-1))
tensor(0.7500)
use the from_logits option if your inputs will not be sigmoid squashed:
>>> recall = BinaryRecall(from_logits=True)
>>> unsigmoided = torch.log(inputs / (1 - inputs))
>>> recall(unsigmoided, targets)
tensor(0.7500)
for cases like variable length time sequences, use the mask_target option:
>>> recall = BinaryRecall(mask_target=-1)
>>> inputs = inputs.reshape(2, 4)  # (batch, sequence length)
>>> # note we mask some timesteps with -1
>>> targets = torch.tensor([[0., 1., 1., -1],
...                         [0., 1, -1, -1]])
>>> recall(inputs, targets)
tensor(0.6667)
CategoricalAccuracy¶
- class CategoricalAccuracy(mask_target=-1)¶
categorical accuracy over possibly unnormalized scores given target indices.
Built to have a similar interface to nn.CrossEntropyLoss.
- Parameters
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
Example
>>> import torch
>>> _ = torch.manual_seed(0)
>>> from hearth.metrics import CategoricalAccuracy
>>>
>>> metric = CategoricalAccuracy()
>>> metric
CategoricalAccuracy(mask_target=-1)
>>> targets = torch.randint(4, size=(100,))  # (batch,)
>>> predictions = torch.normal(0, 1, size=(100, 4))  # (batch, classes)
>>> metric(predictions, targets)
tensor(0.2200)
mask targets using mask_target; this is particularly useful for mixed length sequence predictions:
>>> targets = torch.tensor([[0, 5, 9, -1], [2, 3, -1, -1], [1, 6, 3, 4]])  # (batch, time)
>>> predictions = torch.normal(0, 1, size=(3, 4, 9))  # (batch, time, classes)
>>> metric(predictions, targets)
tensor(0.1111)
CategoricalF1¶
- class CategoricalF1(eps=1e-08, dim=0, mask_target=-1)¶
categorical f1 score over possibly unnormalized scores given target indices.
Built to have a somewhat similar interface to nn.CrossEntropyLoss.
- Parameters
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
Example
>>> import torch
>>> from hearth.metrics import CategoricalF1
>>>
>>> # here inputs is (batch, feats) and is not necessarily normalized
>>> inputs = torch.tensor([[-0.2956, 1.6050, 0.4113, -1.9041],
...                        [ 0.2095, 1.2959, -1.2466, 2.2302],
...                        [ 0.4702, -0.7506, 1.6751, 0.3370],
...                        [-0.4504, 0.5301, -1.1206, -0.5896],
...                        [ 0.7439, 0.4022, 0.5913, 0.1511],
...                        [-0.0523, -1.0082, 0.5536, -1.2748],
...                        [ 0.5151, -0.9396, 0.7223, -0.5500],
...                        [ 0.1083, 2.7311, 1.4429, 1.0640]])
>>> targets = torch.tensor([3, 1, 1, 2, 0, 3, 0, 2])  # true class indices
>>>
>>> f1 = CategoricalF1()
>>> f1(inputs, targets)
tensor(0.1667)
if you're predicting categories for variable length sequences, you can mask your targets with mask_target (-1 by default); predictions at those timesteps will then be excluded from the metric:
>>> sequence_inputs = inputs.reshape(2, 4, 4)  # (batch, timesteps, classes)
>>> # note the last two timesteps in targets are masked with -1
>>> masked_targets = torch.tensor([[ 3, 1, 1, 2],
...                                [ 0, 3, -1, -1]])  # (batch, timesteps)
>>> f1(sequence_inputs, masked_targets)
tensor(0.2500)
CategoricalFBeta¶
- class CategoricalFBeta(beta=1, eps=1e-08, dim=0, mask_target=-1)¶
categorical fbeta score over possibly unnormalized scores given target indices.
Built to have a somewhat similar interface to nn.CrossEntropyLoss.
- Parameters
  - beta (float) – beta value for weighting precision and recall: beta < 1 weights precision higher, while beta > 1 gives more weight to recall. Defaults to 1 (making the default the same as f1 score); see the formula sketch under BinaryFBeta above.
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
Example
>>> import torch
>>> from hearth.metrics import CategoricalFBeta
>>>
>>> # here inputs is (batch, feats) and is not necessarily normalized
>>> inputs = torch.tensor([[-0.2956, 1.6050, 0.4113, -1.9041],
...                        [ 0.2095, 1.2959, -1.2466, 2.2302],
...                        [ 0.4702, -0.7506, 1.6751, 0.3370],
...                        [-0.4504, 0.5301, -1.1206, -0.5896],
...                        [ 0.7439, 0.4022, 0.5913, 0.1511],
...                        [-0.0523, -1.0082, 0.5536, -1.2748],
...                        [ 0.5151, -0.9396, 0.7223, -0.5500],
...                        [ 0.1083, 2.7311, 1.4429, 1.0640]])
>>> targets = torch.tensor([3, 1, 1, 2, 0, 3, 0, 2])  # true class indices
>>>
>>> fbeta = CategoricalFBeta(beta=.5)
>>> fbeta(inputs, targets)
tensor(0.2083)
if you're predicting categories for variable length sequences, you can mask your targets with mask_target (-1 by default); predictions at those timesteps will then be excluded from the metric:
>>> sequence_inputs = inputs.reshape(2, 4, 4)  # (batch, timesteps, classes)
>>> # note the last two timesteps in targets are masked with -1
>>> masked_targets = torch.tensor([[ 3, 1, 1, 2],
...                                [ 0, 3, -1, -1]])  # (batch, timesteps)
>>> fbeta(sequence_inputs, masked_targets)
tensor(0.2500)
CategoricalPrecision¶
- class CategoricalPrecision(eps=1e-08, dim=0, mask_target=-1)¶
categorical precision over possibly unnormalized scores given target indices.
Built to have a somewhat similar interface to nn.CrossEntropyLoss.
- Parameters
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
Example
>>> import torch
>>> from hearth.metrics import CategoricalPrecision
>>>
>>> # here inputs is (batch, feats) and is not necessarily normalized
>>> inputs = torch.tensor([[-0.2956, 1.6050, 0.4113, -1.9041],
...                        [ 0.2095, 1.2959, -1.2466, 2.2302],
...                        [ 0.4702, -0.7506, 1.6751, 0.3370],
...                        [-0.4504, 0.5301, -1.1206, -0.5896],
...                        [ 0.7439, 0.4022, 0.5913, 0.1511],
...                        [-0.0523, -1.0082, 0.5536, -1.2748],
...                        [ 0.5151, -0.9396, 0.7223, -0.5500],
...                        [ 0.1083, 2.7311, 1.4429, 1.0640]])
>>> targets = torch.tensor([3, 1, 1, 2, 0, 3, 0, 2])  # true class indices
>>>
>>> precision = CategoricalPrecision()
>>> precision(inputs, targets)
tensor(0.2500)
if you're predicting categories for variable length sequences, you can mask your targets with mask_target (-1 by default); predictions at those timesteps will then be excluded from the metric:
>>> sequence_inputs = inputs.reshape(2, 4, 4)  # (batch, timesteps, classes)
>>> # note the last two timesteps in targets are masked with -1
>>> masked_targets = torch.tensor([[ 3, 1, 1, 2],
...                                [ 0, 3, -1, -1]])  # (batch, timesteps)
>>> precision(sequence_inputs, masked_targets)
tensor(0.2500)
CategoricalRecall¶
- class CategoricalRecall(eps=1e-08, dim=0, mask_target=-1)¶
categorical recall over possibly unnormalized scores given target indices.
Built to have a somewhat similar interface to nn.CrossEntropyLoss.
- Parameters
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
Example
>>> import torch
>>> from hearth.metrics import CategoricalRecall
>>>
>>> # here inputs is (batch, feats) and is not necessarily normalized
>>> inputs = torch.tensor([[-0.2956, 1.6050, 0.4113, -1.9041],
...                        [ 0.2095, 1.2959, -1.2466, 2.2302],
...                        [ 0.4702, -0.7506, 1.6751, 0.3370],
...                        [-0.4504, 0.5301, -1.1206, -0.5896],
...                        [ 0.7439, 0.4022, 0.5913, 0.1511],
...                        [-0.0523, -1.0082, 0.5536, -1.2748],
...                        [ 0.5151, -0.9396, 0.7223, -0.5500],
...                        [ 0.1083, 2.7311, 1.4429, 1.0640]])
>>> targets = torch.tensor([3, 1, 1, 2, 0, 3, 0, 2])  # true class indices
>>>
>>> recall = CategoricalRecall()
>>> recall(inputs, targets)
tensor(0.1250)
if you're predicting categories for variable length sequences, you can mask your targets with mask_target (-1 by default); predictions at those timesteps will then be excluded from the metric:
>>> sequence_inputs = inputs.reshape(2, 4, 4)  # (batch, timesteps, classes)
>>> # note the last two timesteps in targets are masked with -1
>>> masked_targets = torch.tensor([[ 3, 1, 1, 2],
...                                [ 0, 3, -1, -1]])  # (batch, timesteps)
>>> recall(sequence_inputs, masked_targets)
tensor(0.2500)
Metric¶
- class Metric¶
abstract base class for all metrics.
Note
Metrics should inherit from this class and define a forward method (for compatibility with torch losses and modules).
- forward(inputs, targets, **kwargs)¶
call this metric with inputs and targets and any optional keyword arguments.
- Parameters
  - inputs – the input tensor, generally some form of prediction.
  - targets – the target tensor.
- Return type
  Tensor
- Returns
  a scalar tensor
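for example, a minimal custom metric might look like this (a sketch assuming only the contract above; it also assumes Metric is importable from hearth.metrics like the other classes here, and MeanAbsoluteError is illustrative, not part of the library):
>>> import torch
>>> from hearth.metrics import Metric
>>>
>>> class MeanAbsoluteError(Metric):  # hypothetical subclass for illustration
...     def forward(self, inputs, targets, **kwargs):
...         return (inputs - targets).abs().mean()
>>>
>>> mae = MeanAbsoluteError()
>>> mae.forward(torch.tensor([1.0, 2.0]), torch.tensor([0.0, 4.0]))
tensor(1.5000)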
MetricStack¶
- class MetricStack(*args, **kwargs)¶
a MetricStack is a keyed collection of metric functions that are all called with a given set of inputs.
This is useful when you'd like to run more than one metric function on an output.
Note
if you're using hearth.loop.Loop and pass a list of metrics, a MetricStack will be created for you automatically.
Example
>>> import torch
>>> from hearth.metrics import BinaryAccuracy, BinaryF1, MetricStack
>>> _ = torch.manual_seed(0)
>>>
>>> metrics = MetricStack(BinaryAccuracy(), BinaryF1())
>>> metrics
MetricStack(BinaryAccuracy(mask_target=-1, from_logits=False), BinaryF1(eps=1e-08, dim=0, mask_target=-1, from_logits=False))
if metrics are provided as args, keys will be created based on the metric names:
>>> list(metrics.keys())
['binary_accuracy', 'binary_f1']
you can access the individual metric functions by key if you need to:
>>> metrics.binary_accuracy
BinaryAccuracy(mask_target=-1, from_logits=False)
calling the metric stack with inputs and targets will compute all metrics for those inputs and targets and return a keyed TensorDict:
>>> inputs = torch.rand(10, 1)
>>> targets = torch.rand(10, 1).round()
>>>
>>> metrics(inputs, targets)
TensorDict({'binary_accuracy': tensor(0.7000), 'binary_f1': tensor(0.5714)})
if you would rather choose your own keys you can instantiate the MetricStack with keyword args like so:
>>> metrics = MetricStack(acc=BinaryAccuracy(), f1=BinaryF1())
>>> metrics(inputs, targets)
TensorDict({'acc': tensor(0.7000), 'f1': tensor(0.5714)})
keyword arguments will be passed to all metric functions. This is useful if you need them for some metrics and not others, particularly because all hearth metrics accept variable keyword args and ignore them:
>>> def my_metric(inputs, targets, weights, **kwargs):
...     return (inputs * weights).sum() / (targets * weights).sum()
>>>
>>> metrics = MetricStack(BinaryAccuracy(), my_metric)
>>> weights = torch.normal(0, 5, size=(10, 1))
>>> metrics(inputs, targets, weights=weights)
TensorDict({'binary_accuracy': tensor(0.7000), 'my_metric': tensor(23.2894)})
- items()¶
- keys()¶
MultiHeadMetric¶
- class MultiHeadMetric(**kwargs)¶
a wrapper for metrics on multi-output models.
Example
>>> import torch
>>> from hearth.metrics import MultiHeadMetric, BinaryAccuracy, CategoricalAccuracy
>>> _ = torch.manual_seed(0)
>>>
>>> metric = MultiHeadMetric(a=BinaryAccuracy(), b=CategoricalAccuracy())
>>> metric
MultiHeadMetric(a=BinaryAccuracy(mask_target=-1, from_logits=False), b=CategoricalAccuracy(mask_target=-1))
inputs and targets should be some kind of mapping with head names matching those we specified in our metric.
the output will be a hearth.containers.TensorDict:
>>> batch_size = 10
>>> inputs = {'a': torch.rand(batch_size, 1),
...           'b': torch.normal(batch_size, 1, size=(10, 4))}
>>> targets = {'a': torch.rand(batch_size, 1).round(),
...            'b': torch.randint(4, size=(batch_size,))}
>>> metric(inputs, targets)
TensorDict({'a': tensor(0.4000), 'b': tensor(0.6000)})
PearsonCorrCoef¶
- class PearsonCorrCoef(transform_inputs=None)¶
pearson correlation coefficient with optional input transform.
- Parameters
  - transform_inputs (Optional[Literal['sigmoid', 'tanh']]) – one of 'sigmoid', 'tanh', or None. If set, apply this transform to inputs before computing the metric. Defaults to None (no transform).
Example
>>> import torch
>>> _ = torch.manual_seed(0)
>>> from hearth.metrics import PearsonCorrCoef
>>>
>>> metric = PearsonCorrCoef()
by default transform_inputs=None:
>>> targets = torch.normal(0, 1, size=(10, 1))  # (batch, 1)
>>> predictions = torch.normal(0, 1, size=(10, 1))  # (batch, 1)
>>> metric(predictions, targets)
tensor(0.3518)
also works if extra dim is removed:
>>> metric(predictions.squeeze(), targets.squeeze())
tensor(0.3518)
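under the hood this is the standard Pearson r; as a sanity check, here it is computed by hand on a small example (the textbook formula, assumed rather than taken from the library source):
>>> x = torch.tensor([1., 2., 3., 4.])
>>> y = torch.tensor([1., 3., 2., 4.])
>>> xc, yc = x - x.mean(), y - y.mean()  # center both variables
>>> round(((xc * yc).sum() / (xc.norm() * yc.norm())).item(), 4)
0.8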
we can specify a sigmoid transform:
>>> metric = PearsonCorrCoef(transform_inputs='sigmoid')
>>> metric(torch.normal(0, 1, size=(5,)), torch.tensor([1.0, 1.0, 0.0, 1.0, 0.0]))
tensor(0.3267)
or a tanh transform:
>>> metric = PearsonCorrCoef(transform_inputs='tanh')
>>> metric(torch.normal(0, 1, size=(5,)), torch.tensor([1.0, -1.0, 1.0, -1.0, 1.0]))
tensor(0.2911)
- transform_inputs: Optional[Literal['sigmoid', 'tanh']] = None¶
- forward(inputs, targets, **kwargs)¶
call this metric with inputs and targets and any optional keyword arguments.
- Parameters
  - inputs – the input tensor, generally some form of prediction.
  - targets – the target tensor.
- Return type
  Tensor
- Returns
  a scalar tensor
Running¶
- class Running(fn)¶
wrapper for metrics and losses for tracking running averages over batches.
- Parameters
  - fn (Callable[[Tensor, Tensor], Tensor]) – a loss or metric function.
Example
>>> import torch
>>> from torch import nn
>>> _ = torch.manual_seed(0)
>>> from hearth.metrics import Running
>>>
>>> running_loss = Running(nn.BCELoss())
>>> running_loss
Running(BCELoss())
>>> predictions = torch.rand(10, 1, requires_grad=True)
>>> targets = (torch.rand(10, 1) > .5) * 1.0
>>> running_loss(predictions, targets)  # this should have grad!
tensor(0.5636, grad_fn=<BinaryCrossEntropyBackward0>)
>>> running_loss.average
0.5636
generate some new random inputs and run again, this time with a smaller batch:
>>> predictions = torch.rand(6, 1, requires_grad=True)
>>> targets = (torch.rand(6, 1) > .5) * 1.0
>>> running_loss(predictions, targets)
tensor(1.0943, grad_fn=<BinaryCrossEntropyBackward0>)
>>> running_loss.average
0.7625999748706818
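note that the running average weights each batch by its sample count (10 and 6 here), which lines up with the value above:
>>> round((0.5636 * 10 + 1.0943 * 6) / 16, 4)
0.7626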
call reset() to reset it:
>>> running_loss.reset()
>>> running_loss.average
0.0
- reset()¶
reset all counters and totals on this metric.
- property average: float¶
- Return type
float
- to(device)¶
SoftBinaryPrecision¶
- class SoftBinaryPrecision(eps=1e-08, dim=0, mask_target=-1, from_logits=False)¶
a soft version of binary precision over possibly unnormalized values (see from_logits) with target masking support.
- Parameters
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
  - from_logits (bool) – if True, inputs are expected to be unnormalized and a sigmoid will be applied before comparison with targets. Defaults to False.
Example
>>> import torch
>>> from hearth.metrics import SoftBinaryPrecision
>>>
>>> inputs = torch.tensor([0.7116, 0.6470, 0.5039, 0.9953, 0.8948, 0.4229, 0.8654, 0.8108])
>>> targets = torch.tensor([0., 1., 1., 1., 0., 1., 0., 0.])
>>>
>>> softprecision = SoftBinaryPrecision()
>>> softprecision(inputs, targets)
tensor(0.4390)
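"soft" here means no decision threshold is applied: each prediction contributes its probability as a fractional positive. Checking against the output above (inferred from the numbers, not from the library source):
>>> round(((inputs * targets).sum() / inputs.sum()).item(), 4)  # soft TP / soft predicted positives
0.439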
works fine with an extra dim:
>>> softprecision(inputs.unsqueeze(-1), targets.unsqueeze(-1))
tensor(0.4390)
use the from_logits option if your inputs will not be sigmoid squashed:
>>> softprecision = SoftBinaryPrecision(from_logits=True)
>>> unsigmoided = torch.log(inputs / (1 - inputs))
>>> softprecision(unsigmoided, targets)
tensor(0.4390)
for cases like variable length time sequences, use the mask_target option:
>>> softprecision = SoftBinaryPrecision(mask_target=-1)
>>> inputs = inputs.reshape(2, 4)  # (batch, sequence length)
>>> # note we mask some timesteps with -1
>>> targets = torch.tensor([[0., 1., 1., -1],
...                         [0., 1, -1, -1]])
>>> softprecision(inputs, targets)
tensor(0.4949)
SoftBinaryRecall¶
- class SoftBinaryRecall(eps=1e-08, dim=0, mask_target=-1, from_logits=False)¶
a soft version of binary recall over possibly unnormalized values (see from_logits) with target masking support.
- Parameters
  - mask_target (int) – mask targets equal to this value. Defaults to -1.
  - from_logits (bool) – if True, inputs are expected to be unnormalized and a sigmoid will be applied before comparison with targets. Defaults to False.
Example
>>> import torch
>>> from hearth.metrics import SoftBinaryRecall
>>>
>>> inputs = torch.tensor([0.7116, 0.6470, 0.5039, 0.9953, 0.8948, 0.4229, 0.8654, 0.8108])
>>> targets = torch.tensor([0., 1., 1., 1., 0., 1., 0., 0.])
>>>
>>> softrecall = SoftBinaryRecall()
>>> softrecall(inputs, targets)
tensor(0.6423)
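as with SoftBinaryPrecision, no threshold is applied; each positive target is credited with its predicted probability. Checking against the output above (inferred from the numbers, not from the library source):
>>> round(((inputs * targets).sum() / targets.sum()).item(), 4)  # soft TP / actual positives
0.6423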
works fine with an extra dim:
>>> softrecall(inputs.unsqueeze(-1), targets.unsqueeze(-1))
tensor(0.6423)
use the from_logits option if your inputs will not be sigmoid squashed:
>>> softrecall = SoftBinaryRecall(from_logits=True)
>>> unsigmoided = torch.log(inputs / (1 - inputs))
>>> softrecall(unsigmoided, targets)
tensor(0.6423)
for cases like variable length time sequences, use the mask_target option:
>>> softrecall = SoftBinaryRecall(mask_target=-1)
>>> inputs = inputs.reshape(2, 4)  # (batch, sequence length)
>>> # note we mask some timesteps with -1
>>> targets = torch.tensor([[0., 1., 1., -1],
...                         [0., 1, -1, -1]])
>>> softrecall(inputs, targets)
tensor(0.5246)