ddf.ant_link package
Submodules
ddf.ant_link.ant_link module
- class ddf.ant_link.ant_link.AntLink(id: ddf.ddf.Id)[source]
Bases:
ABC
- backpropagate(node: Node, gradient: ndarray, losstype: LossChannel) → None [source]
- generate_gradient_messages(node: Node, gradient_storage: GradientStorage) → list[GradientMessage] [source]
- optimize_step(gradient_storage: GradientStorage, object_id: ObjectId, exchange_vector: ExchangeVector) → None | ExchangeVector [source]
Return updated values for state_vec and state_cov.
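The three abstract methods above make up the whole link contract. Below is a minimal structural sketch of a concrete subclass; the base class is re-stubbed so the snippet runs without ddf installed, and NoOpLink is a hypothetical example, not part of the package:

```python
from abc import ABC, abstractmethod
import numpy as np

# Stand-in for the documented ABC so the sketch runs without ddf installed;
# the real base class lives in ddf.ant_link.ant_link.
class AntLink(ABC):
    def __init__(self, id):
        self.id = id

    @abstractmethod
    def backpropagate(self, node, gradient: np.ndarray, losstype) -> None: ...

    @abstractmethod
    def generate_gradient_messages(self, node, gradient_storage) -> list: ...

    @abstractmethod
    def optimize_step(self, gradient_storage, object_id, exchange_vector):
        """Return updated values for state_vec and state_cov, or None."""

class NoOpLink(AntLink):
    """Hypothetical subclass that receives gradients but never updates state."""
    def backpropagate(self, node, gradient, losstype) -> None:
        pass  # a real link would fold `gradient` into its gradient storage

    def generate_gradient_messages(self, node, gradient_storage) -> list:
        return []  # nothing to forward to other nodes

    def optimize_step(self, gradient_storage, object_id, exchange_vector):
        return None  # per the interface, None signals "no update this round"

link = NoOpLink(id="example-id")  # Id value is illustrative
```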
- class ddf.ant_link.ant_link.GradientSubscription(loss_channel: ddf.ant_link.loss_channel.LossChannel, solver: collections.abc.Callable[[numpy.ndarray, ddf.information.ExchangeVector, dict], tuple[ddf.information.ExchangeVector, dict]], solver_state: dict)[source]
Bases:
object
- loss_channel: LossChannel
- solver: Callable[[ndarray, ExchangeVector, dict], tuple[ExchangeVector, dict]]
- solver_state: dict
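The solver field must match (grad, exchange_vector, solver_dict) → (ExchangeVector, dict), while Rprop_step also takes step-size bounds, so those extras have to be closed over when wiring a subscription. A minimal sketch, assuming ddf is importable as documented (the step sizes and channel fields are illustrative):

```python
import numpy as np
from ddf.ant_link.ant_link import GradientSubscription, Rprop_step
from ddf.ant_link.loss_channel import FuturePastProbe, LossChannel

n = 4                           # illustrative parameter count
delta_0   = np.full(n, 0.1)     # initial step size per parameter
delta_min = np.full(n, 1e-6)    # step-size floor
delta_max = np.full(n, 1.0)     # step-size ceiling

# Rprop_step's extra arguments are bound in a lambda so the callable
# matches the documented three-argument solver signature.
subscription = GradientSubscription(
    loss_channel=LossChannel(mode=FuturePastProbe.FUTURE, lossref=None,
                             name="example"),  # field values are illustrative
    solver=lambda grad, ev, sd: Rprop_step(grad, ev, delta_0, delta_min,
                                           delta_max, sd),
    solver_state={},  # the solver_dict carried between iterations, empty at start
)
```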
- ddf.ant_link.ant_link.Rprop_step(grad: ndarray, exchange_vector: ExchangeVector, delta_0: ndarray, delta_min: ndarray, delta_max: ndarray, solver_dict: dict, lb: ndarray = array([], dtype=float64), ub: ndarray = array([], dtype=float64)) → tuple[ExchangeVector, dict] [source]
Calculate a gradient descent step.
Calculate a gradient descent step in the device parameters while controlling the learning rate with the iRprop algorithm.
Respects the provided parameter bounds.
See C. Igel and M. Hüsken. Improving the Rprop Learning Algorithm. In Proceedings of the Second International Symposium on Neural Computation (NC 2000), 2000.
- Parameters:
grad (1D ndarray) – loss gradient
exchange_vector (ExchangeVector) – current parameter vector
delta_0 (1D ndarray) – initial step size
delta_min (1D ndarray) – minimum step size
delta_max (1D ndarray) – maximum step size
solver_dict (dict) – dict for transferring temporary variables from iteration to iteration
lb (1D ndarray) – parameter lower bound vector
ub (1D ndarray) – parameter upper bound vector
- Returns:
ExchangeVector – new parameter vector after the update
dict – the solver_dict to pass to the next iteration
- Return type:
tuple[ExchangeVector, dict]
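For orientation, the core iRprop⁻ update on plain numpy arrays looks roughly as follows. This is an illustrative re-sketch of the published rule (Igel & Hüsken, 2000), not the package's implementation; bound handling (lb/ub) is omitted and the exact variant used by ddf may differ:

```python
import numpy as np

def irprop_minus_step(grad, x, state, delta_0, delta_min, delta_max,
                      eta_plus=1.2, eta_minus=0.5):
    """One illustrative iRprop- step on a raw parameter vector `x`."""
    prev_grad = state.get("prev_grad", np.zeros_like(grad))
    delta = state.get("delta", delta_0.copy())

    agree = grad * prev_grad
    # Same gradient sign: grow the step; sign flip: shrink the step and zero
    # the gradient for this step (the "i" in iRprop-; no weight backtracking).
    delta = np.where(agree > 0, np.minimum(delta * eta_plus, delta_max), delta)
    delta = np.where(agree < 0, np.maximum(delta * eta_minus, delta_min), delta)
    grad = np.where(agree < 0, 0.0, grad)

    x_new = x - np.sign(grad) * delta
    return x_new, {"prev_grad": grad, "delta": delta}

# Minimize f(x) = x.x, whose gradient is 2x.
x, state = np.array([3.0, -2.0]), {}
for _ in range(50):
    x, state = irprop_minus_step(2 * x, x, state,
                                 delta_0=np.full(2, 0.1),
                                 delta_min=np.full(2, 1e-6),
                                 delta_max=np.full(2, 1.0))
print(x)  # approaches [0, 0]
```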
ddf.ant_link.external module
- class ddf.ant_link.external.AntLinkExternal(id: ddf.ddf.Id, update_counter: int, gradient_threshold: float = 1e-06, max_gradients: dict[ddf.ant_link.loss_channel.LossChannel, numpy.ndarray] = <factory>)[source]
Bases:
AntLink
Refer to a different ObjectId on a different Node.
- generate_gradient_messages(node: Node, gradient_storage: GradientStorage) → list[GradientMessage] [source]
- generate_info_message(node: Node, state: ExchangeVector) → InfoMessage [source]
Generate an info message.
- gradient_threshold: float = 1e-06
- max_gradients: dict[LossChannel, ndarray]
- update_counter: int
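The gradient_threshold and max_gradients fields suggest that an external link forwards a gradient only when it is significant relative to the largest gradient seen on that loss channel. That policy is an assumption, not taken from the package; a hedged sketch:

```python
import numpy as np

def should_forward(gradient: np.ndarray, max_gradient: np.ndarray,
                   gradient_threshold: float = 1e-06) -> bool:
    """Assumed filter: skip gradients that are negligible compared with
    the channel's running maximum (hypothetical helper, not ddf API)."""
    scale = np.maximum(np.abs(max_gradient), 1.0)
    return bool(np.any(np.abs(gradient) > gradient_threshold * scale))

print(should_forward(np.array([1e-9, 2e-9]), np.array([1.0, 1.0])))  # False
print(should_forward(np.array([1e-3, 0.0]),  np.array([1.0, 1.0])))  # True
```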
ddf.ant_link.internal module
ddf.ant_link.local module
- class ddf.ant_link.local.AntLinkLocal(id: Id, gradient_subscriptions: list[GradientSubscription])[source]
Bases:
AntLink
Refer to the same ObjectId on the same Node.
- gradient_subscriptions: list[GradientSubscription]
- optimize_step(gradient_storage: GradientStorage, object_id: ObjectId, exchange_vector: ExchangeVector) → None | ExchangeVector [source]
Return updated values for state_vec and state_cov.
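A plausible reading of the local optimize_step: look up the accumulated gradient for each subscribed loss channel, run the subscription's solver, and thread solver_state through to the next call. The sketch below encodes that reading with a plain dict standing in for GradientStorage; it is not the package's code:

```python
def optimize_step_sketch(subscriptions, gradients, exchange_vector):
    """`gradients` stands in for GradientStorage as a dict mapping
    LossChannel -> accumulated ndarray gradient (assumption)."""
    updated = None
    for sub in subscriptions:
        grad = gradients.get(sub.loss_channel)
        if grad is None:
            continue  # no gradient arrived on this channel
        # Each solver returns the updated vector plus its carried state.
        exchange_vector, sub.solver_state = sub.solver(
            grad, exchange_vector, sub.solver_state)
        updated = exchange_vector
    return updated  # None means no channel produced an update
```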
ddf.ant_link.loss_channel module
- class ddf.ant_link.loss_channel.FuturePastProbe(*values)[source]
Bases:
Enum
- FUTURE = 0
- PAST = 1
- PROBE = 2
- class ddf.ant_link.loss_channel.LossChannel(mode: ddf.ant_link.loss_channel.FuturePastProbe, lossref: uuid.UUID | None, name: str)[source]
Bases:
object
- classmethod from_string(loss_channel: str) → LossChannel [source]
- lossref: UUID | None
- mode: FuturePastProbe
- name: str
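Since the string format accepted by from_string is not documented here, the sketch below exercises only the documented constructor; the field values are illustrative, assuming ddf is importable:

```python
import uuid
from ddf.ant_link.loss_channel import FuturePastProbe, LossChannel

# A channel tied to a concrete loss object via a UUID reference ...
tracked = LossChannel(mode=FuturePastProbe.PAST,
                      lossref=uuid.uuid4(),  # illustrative reference
                      name="example_loss")

# ... and one without a loss reference, as the Optional type permits.
probe = LossChannel(mode=FuturePastProbe.PROBE, lossref=None, name="probe")
print(tracked.mode, probe.name)
```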