narla.multi_agent_network
Layer
- class narla.multi_agent_network.Layer(observation_size, number_of_actions, layer_settings)[source]
Bases: object
A Layer contains a list of Neurons
- Parameters
  - observation_size (int) – Size of the observation which the Layer will receive
  - number_of_actions (int) – Number of actions available to the Layer
  - layer_settings (LayerSettings) – Settings for the Layer
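Example (a minimal construction sketch; the concrete sizes are arbitrary, and LayerSettings, documented below, is assumed to build with its factory defaults):

```python
from narla.multi_agent_network import Layer, LayerSettings

layer = Layer(
    observation_size=4,              # size of the observation the Layer receives
    number_of_actions=2,             # actions available to the Layer
    layer_settings=LayerSettings(),  # neuron settings and neuron count keep their defaults
)
```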
- act(observation)[source]
Take an action based on the observation
- Parameters
  - observation (Tensor) – Observation from the Layer’s local environment
- Return type
Tensor
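Example (hypothetical usage; the observation is a stand-in torch.Tensor whose size matches the observation_size the Layer was built with in the sketch above):

```python
import torch

observation = torch.zeros(4)     # matches observation_size=4 from the sketch above
action = layer.act(observation)  # returns the Layer's action as a Tensor
```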
- static build_connectivity(observation_size, number_of_neurons, local_connectivity)[source]
Build the connectivity matrix
The rows of the matrix are the outputs from the previous layer
The columns of the matrix are the neurons in the current layer
- Parameters
  - observation_size (int) – Number of inputs in the observation
  - number_of_neurons (int) – Number of Neurons in the Layer
  - local_connectivity – If True, the connectivity matrix will be a diagonal with offsets
- Return type
Tensor
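Example (a sketch of inspecting the connectivity matrix directly; build_connectivity is a static method, so no Layer instance is needed, and the row/column interpretation follows the description above):

```python
from narla.multi_agent_network import Layer

connectivity = Layer.build_connectivity(
    observation_size=4,
    number_of_neurons=15,
    local_connectivity=True,  # diagonal-with-offsets pattern instead of full connectivity
)
# Rows correspond to outputs of the previous layer, columns to Neurons in this Layer
print(connectivity.shape)
```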
- distribute_to_neurons(**kwargs)[source]
Distribute data to the Neurons
- Parameters
  - kwargs – Keyword arguments to be distributed
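Example (the keyword names below are purely illustrative; this section does not document which keys the Neurons actually consume):

```python
import torch

# Hypothetical keys -- substitute whatever data your Neurons expect
layer.distribute_to_neurons(
    reward=torch.tensor([1.0]),
    terminated=False,
)
```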
- property layer_output: torch.Tensor
Access the output of the Layer
- Return type
Tensor
- property neurons: List[narla.neurons.neuron.Neuron]
Access the Neurons from the Layer
- Return type
List[Neuron]
- property number_of_neurons: int
Number of Neurons in the Layer
- Return type
int
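Example (reading the Layer’s documented properties; layer_output is assumed to be populated once the Layer has acted at least once):

```python
print(layer.number_of_neurons)   # int – Neurons in the Layer
print(layer.layer_output.shape)  # torch.Tensor – the Layer's output
for neuron in layer.neurons:     # List[Neuron]
    print(type(neuron))
```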
LayerSettings
- class narla.multi_agent_network.LayerSettings(neuron_settings=<factory>, number_of_neurons_per_layer=15)[source]
Bases: narla.settings.base_settings.BaseSettings
- neuron_settings: narla.neurons.neuron_settings.NeuronSettings
- number_of_neurons_per_layer: int = 15
Number of neurons per layer in the network (the last layer always has only one neuron)
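Example (a sketch of overriding the per-layer neuron count; neuron_settings keeps its factory default, a narla.neurons.neuron_settings.NeuronSettings instance):

```python
from narla.multi_agent_network import LayerSettings

layer_settings = LayerSettings(number_of_neurons_per_layer=32)
```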
MultiAgentNetwork
- class narla.multi_agent_network.MultiAgentNetwork(observation_size, number_of_actions, network_settings)[source]
Bases: object
A MultiAgentNetwork contains a list of Layers
- Parameters
  - observation_size (int) – Size of the observation which the MultiAgentNetwork will receive
  - number_of_actions (int) – Number of actions available to the MultiAgentNetwork
  - network_settings (MultiAgentNetworkSettings) – Settings for the MultiAgentNetwork
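Example (a minimal construction sketch; MultiAgentNetworkSettings, documented below, is assumed to build with its factory defaults, and the sizes are arbitrary):

```python
from narla.multi_agent_network import MultiAgentNetwork, MultiAgentNetworkSettings

network = MultiAgentNetwork(
    observation_size=4,
    number_of_actions=2,
    network_settings=MultiAgentNetworkSettings(),  # defaults: 3 layers, local connectivity
)
```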
- act(observation)[source]
Take an action based on the observation
- Parameters
  - observation (Tensor) – Observation from the MultiAgentNetwork environment
- Return type
Tensor
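Example (a hypothetical interaction step; the observation is a stand-in tensor sized to match the network’s observation_size):

```python
import torch

observation = torch.zeros(4)
action = network.act(observation)  # Tensor: the network's chosen action
```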
- distribute_to_layers(**kwargs)[source]
Distribute data to the Layers
- Parameters
  - kwargs – Keyword arguments to be distributed
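Example (as with distribute_to_neurons, the keyword names here are hypothetical; the keys the Layers expect are not documented in this section):

```python
import torch

# Hypothetical keys for passing environment feedback down to the Layers
network.distribute_to_layers(
    reward=torch.tensor([1.0]),
    terminated=False,
)
```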
- property history: narla.history.history.History
Access the History of the MultiAgentNetwork
- Return type
History
- property layers: List[narla.multi_agent_network.layer.Layer]
Access the Layers of the MultiAgentNetwork
- Return type
List[Layer]
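Example (inspecting the network’s documented properties):

```python
for layer in network.layers:        # List[Layer]
    print(layer.number_of_neurons)

history = network.history           # narla.history.history.History
```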
MultiAgentNetworkSettings
- class narla.multi_agent_network.MultiAgentNetworkSettings(layer_settings=<factory>, local_connectivity=True, reward_types=<factory>, number_of_layers=3)[source]
Bases: narla.settings.base_settings.BaseSettings
- layer_settings: narla.multi_agent_network.layer_settings.LayerSettings
Settings for the Layers in the MultiAgentNetwork
- local_connectivity: bool = True
If True, Neurons will only be connected to nearby Neurons
- number_of_layers: int = 3
Total number of layers to use in the network
- reward_types: List[narla.rewards.reward_types.RewardTypes]
Reward types to be used by the Neurons for learning
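Example (a sketch of a customized configuration; the members of narla.rewards.reward_types.RewardTypes are not listed in this section, so reward_types is left at its factory default):

```python
from narla.multi_agent_network import LayerSettings, MultiAgentNetworkSettings

network_settings = MultiAgentNetworkSettings(
    layer_settings=LayerSettings(number_of_neurons_per_layer=20),
    local_connectivity=True,  # Neurons connect only to nearby Neurons
    number_of_layers=4,       # total layers in the network
)
```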