narla.multi_agent_network

Layer

class narla.multi_agent_network.Layer(observation_size, number_of_actions, layer_settings)[source]

Bases: object

A Layer contains a list of Neurons

Parameters
  • observation_size (int) – Size of the observation which the Layer will receive

  • number_of_actions (int) – Number of actions available to the Layer

  • layer_settings (LayerSettings) – Settings for the Layer
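
A minimal construction sketch (hedged: it assumes the default LayerSettings factory values are sufficient to build a Layer, and the sizes are illustrative):

    import narla

    layer_settings = narla.multi_agent_network.LayerSettings()  # default neuron settings
    layer = narla.multi_agent_network.Layer(
        observation_size=8,
        number_of_actions=2,
        layer_settings=layer_settings,
    )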

act(observation)[source]

Take an action based on the observation

Parameters

observation (Tensor) – Observation from the Layer’s local environment

Return type

Tensor
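
For example, assuming observations are flat float tensors of length observation_size (a shape assumption, not confirmed by the signature alone):

    import torch

    observation = torch.rand(8)      # matches observation_size=8 from the sketch above
    action = layer.act(observation)  # returns a Tensor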

static build_connectivity(observation_size, number_of_neurons, local_connectivity)[source]

Build the connectivity matrix

  • The rows of the matrix are the outputs from the previous layer

  • The columns of the matrix are the neurons in the current layer

Parameters
  • observation_size (int) – Number of inputs in the observation

  • number_of_neurons (int) – Number of Neurons in the Layer

  • local_connectivity (bool) – If True, the connectivity matrix will be a banded diagonal (a diagonal with offsets), limiting each Neuron to nearby inputs

Return type

Tensor
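
Since the method is static, it can be called without a Layer instance; the shape interpretation in the comments follows the row/column description above:

    import narla

    mask = narla.multi_agent_network.Layer.build_connectivity(
        observation_size=8,
        number_of_neurons=4,
        local_connectivity=True,
    )
    # mask has 8 rows (outputs from the previous layer) and 4 columns
    # (neurons in the current layer); with local_connectivity=True only
    # the band around the diagonal is populated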

distribute_to_neurons(**kwargs)[source]

Distribute data to the Neurons

Parameters

kwargs – Keyword arguments to be distributed to the Neurons
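
The accepted keyword names depend on what the Neurons consume; the names below are purely illustrative, not part of the documented API:

    layer.distribute_to_neurons(
        reward=reward,          # hypothetical keyword, shown for illustration
        terminated=terminated,  # hypothetical keyword
    )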

property layer_output: torch.Tensor

Access the output of the Layer

Return type

Tensor

learn(*reward_types)[source]

Execute learning phase for Neurons
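
A sketch of the learning call, assuming the reward types come from the network settings (see MultiAgentNetworkSettings.reward_types below):

    layer.learn(*network_settings.reward_types)  # unpack the configured RewardTypes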

property neurons: List[narla.neurons.neuron.Neuron]

Access the Neurons from the Layer

Return type

List[Neuron]

property number_of_neurons: int

Number of Neurons in the Layer

Return type

int

LayerSettings

class narla.multi_agent_network.LayerSettings(neuron_settings=<factory>, number_of_neurons_per_layer=15)[source]

Bases: narla.settings.base_settings.BaseSettings

neuron_settings: narla.neurons.neuron_settings.NeuronSettings

Settings for the Neurons in the Layer

number_of_neurons_per_layer: int = 15

Number of neurons per layer in the network (the last layer always has only one neuron)
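
For example, to widen the layers while keeping the default neuron settings (the factory default for neuron_settings is assumed to be usable as-is):

    import narla

    layer_settings = narla.multi_agent_network.LayerSettings(
        number_of_neurons_per_layer=20,
    )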

MultiAgentNetwork

class narla.multi_agent_network.MultiAgentNetwork(observation_size, number_of_actions, network_settings)[source]

Bases: object

A MultiAgentNetwork contains a list of Layers

Parameters
  • observation_size (int) – Size of the observation which the MultiAgentNetwork will receive

  • number_of_actions (int) – Number of actions available to the MultiAgentNetwork

  • network_settings (MultiAgentNetworkSettings) – Settings for the MultiAgentNetwork
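
A minimal construction sketch for a CartPole-sized problem (the sizes are illustrative, and it assumes the default settings factories produce a usable configuration):

    import narla

    network_settings = narla.multi_agent_network.MultiAgentNetworkSettings()
    network = narla.multi_agent_network.MultiAgentNetwork(
        observation_size=4,
        number_of_actions=2,
        network_settings=network_settings,
    )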

act(observation)[source]

Take an action based on the observation

Parameters

observation (Tensor) – Observation from the MultiAgentNetwork’s environment

Return type

Tensor

compute_biological_rewards()[source]

Compute BiologicalRewards and distribute them to the Neurons

distribute_to_layers(**kwargs)[source]

Distribute data to the Layers

Parameters

kwargs – Keyword arguments to be distributed to the Layers
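
As with Layer.distribute_to_neurons, the keyword names are whatever the Layers consume; this call is illustrative only:

    network.distribute_to_layers(reward=reward)  # hypothetical keyword name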

property history: narla.history.history.History

Access the History of the MultiAgentNetwork

Return type

History

property layers: List[narla.multi_agent_network.layer.Layer]

Access the Layers of the MultiAgentNetwork

Return type

List[Layer]

learn()[source]

Execute learning phase for Layers

record(**kwargs)[source]

Record data into the MultiAgentNetwork’s History

Parameters

kwargs – Keyword arguments to be stored in the History
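
Putting the pieces together, a hedged interaction-loop sketch (env stands in for a gym-style environment, and the record keyword names are illustrative, not part of the documented API):

    import torch

    observation = torch.as_tensor(env.reset(), dtype=torch.float32)  # hypothetical env
    for _ in range(1000):
        action = network.act(observation)
        next_observation, reward, terminated = env.step(action)  # hypothetical env API
        network.record(observation=observation, reward=reward)   # illustrative keywords
        network.compute_biological_rewards()
        network.learn()
        observation = torch.as_tensor(next_observation, dtype=torch.float32)
        if terminated:
            break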

MultiAgentNetworkSettings

class narla.multi_agent_network.MultiAgentNetworkSettings(layer_settings=<factory>, local_connectivity=True, reward_types=<factory>, number_of_layers=3)[source]

Bases: narla.settings.base_settings.BaseSettings

layer_settings: narla.multi_agent_network.layer_settings.LayerSettings

Settings for the Layers in the MultiAgentNetwork

local_connectivity: bool = True

If True, Neurons will only be connected to nearby Neurons

number_of_layers: int = 3

Total number of layers to use in the network

reward_types: List[narla.rewards.reward_types.RewardTypes]

Reward types to be used by the Neurons for learning
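
For example, overriding the structural fields while leaving the factory defaults in place (the specific RewardTypes members are defined in narla.rewards.reward_types and are not listed here, so none are shown):

    import narla

    network_settings = narla.multi_agent_network.MultiAgentNetworkSettings(
        local_connectivity=True,
        number_of_layers=3,
        # reward_types expects RewardTypes members from narla.rewards.reward_types;
        # layer_settings defaults to a factory-built LayerSettings
    )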