Event Delivery Manager

The EventDeliveryManager coordinates the delivery of events (e.g., spikes and secondary data) between nodes in NEST simulations, managing buffers and routing events across MPI processes for distributed computing. It keeps track of timing through modulo-based lookup tables for time slicing, optimizes communication via send/receive buffers, and ensures accurate delivery both for off-grid (precise-timing) spikes and for secondary events such as gap-junction or rate data. This manager is essential for maintaining simulation accuracy and scalability, particularly in large-scale parallel simulations with complex event-routing requirements.
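As a rough orientation, the sketch below shows how the functions documented here cooperate over one min_delay time slice. It is a simplified illustration, not the literal kernel loop; kernel() is NEST's accessor to the kernel singleton, and the call placement is schematic.

    #include "kernel_manager.h"

    // Conceptual per-slice flow (simplified; not verbatim NEST kernel code).
    void
    simulate_one_slice_sketch( const size_t tid )
    {
      // 1. During the update phase, nodes hand events to the manager, e.g.
      //    from inside Node::update():
      //      nest::kernel().event_delivery_manager.send( *this, spike_event, lag );

      // 2. After all nodes have been updated, spikes are collocated into MPI
      //    buffers, exchanged between processes, and delivered to their
      //    local targets:
      nest::kernel().event_delivery_manager.gather_spike_data();
      nest::kernel().event_delivery_manager.deliver_events( tid );
    }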

class EventDeliveryManager : public nest::ManagerInterface

Public Functions

virtual void initialize(const bool) override

Prepare manager for operation.

After this method has completed, the manager should be completely initialized and “ready for action”.

See also

finalize()

Note

Initialization of any given manager may depend on other managers having been initialized before. KernelManager::initialize() is responsible for calling the initialization routines on the specific managers in the correct order.

Parameters:

adjust_number_of_threads_or_rng_only – Pass true if calling from KernelManager::change_number_of_threads() or RandomManager::set_status() to limit operations to those necessary for thread adjustment or for switching or re-seeding the RNGs.

virtual void finalize(const bool) override

Take down manager after operation.

After this method has completed, all dynamic data structures created by the manager shall be deallocated and containers emptied. Plain variables need not be reset.

See also

initialize()

Note

Finalization of any given manager may depend on other managers not having been finalized yet. KernelManager::finalize() is responsible for calling the finalization routines on the specific managers in the correct order, i.e., the opposite of the initialize() order.

Parameters:

adjust_number_of_threads_or_rng_only – Pass true if calling from KernelManager::change_number_of_threads() to limit operations to those necessary for thread adjustment.

virtual void set_status(const DictionaryDatum&) override

Set the status of the manager.

See also

get_status()

virtual void get_status(DictionaryDatum&) override

Retrieve the status of the manager.

See also

set_status()

Note

This would ideally be a const function. However, some managers delay the update of internal variables up to the point where they are needed (e.g., before reporting their values to the user, or before simulate is called). An example for this pattern is the call to update_delay_extrema_() right at the beginning of ConnectionManager::get_status().
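In C++ kernel code, the status dictionary is an SLI DictionaryDatum; a minimal sketch of querying the manager follows (the exact keys returned are implementation-defined):

    #include "dictdatum.h"
    #include "kernel_manager.h"

    // Sketch: read the manager's current status into a dictionary.
    void
    query_event_delivery_status()
    {
      DictionaryDatum d( new Dictionary );
      nest::kernel().event_delivery_manager.get_status( d );
      // d now holds the manager's status entries, e.g., spike counters.
    }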

template<class EventT>
inline void send(Node &source, EventT &e, const long lag = 0)

Standard routine for sending events.

This method decides if the event has to be delivered locally or globally. It exists to keep a clean and uniform interface for the event-sending mechanism.

See also

send_local()

Note

Only specializations of SpikeEvent send remotely. A specialization for DSSpikeEvent exists to prevent these events from being sent to remote processes.
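In model code, send() is typically called from a neuron's update() loop when a threshold crossing is detected. A minimal sketch, assuming a hypothetical model class MyNeuron and a placeholder condition crossed_threshold; set_spiketime() is provided by NEST's ArchivingNode base class:

    // Inside a model's update function (MyNeuron and crossed_threshold are
    // illustrative placeholders, not NEST API).
    void
    MyNeuron::update( nest::Time const& origin, const long from, const long to )
    {
      for ( long lag = from; lag < to; ++lag )
      {
        // ... integrate membrane dynamics for this step ...
        if ( crossed_threshold )
        {
          set_spiketime( nest::Time::step( origin.get_steps() + lag + 1 ) );
          nest::SpikeEvent se;
          nest::kernel().event_delivery_manager.send( *this, se, lag );
        }
      }
    }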

inline void send_secondary(Node &source, SecondaryEvent &e)

Send a secondary event to remote processes.

void send_local(size_t t, Node &source, Event &e)

Send event e to all targets of node source on thread t.

inline void send_remote(size_t tid, SpikeEvent&, const long lag = 0)

Add node ID of event sender to the spike_register.

An event sent through this method will remain in the queue until the network time has advanced by min_delay_ steps. After this period the buffers are collocated and sent to the partner machines.

Old documentation from network.h: Place an event in the global event queue. Add event to the queue to be delivered when it is due. At the delivery time, the target list of the sender is iterated and the event is delivered to all targets. The event is guaranteed to arrive at the receiver when all elements are updated and the system is in a synchronised (single threaded) state.

See also

send_to_targets()
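The SpikeEvent specialization of send() dispatches roughly as follows. This is a simplified sketch of the kernel-internal logic, not verbatim source; off_grid_spiking_ is an assumed member name:

    // Simplified dispatch sketch for the SpikeEvent specialization
    // (off_grid_spiking_ is an assumed internal flag).
    template <>
    inline void
    EventDeliveryManager::send< nest::SpikeEvent >( nest::Node& source, nest::SpikeEvent& e, const long lag )
    {
      e.set_sender( source );
      const size_t tid = source.get_thread();
      if ( source.has_proxies() ) // regular nodes: route via the spike register
      {
        if ( off_grid_spiking_ )
        {
          send_off_grid_remote( tid, e, lag );
        }
        else
        {
          send_remote( tid, e, lag );
        }
      }
      else // devices without proxies deliver only to local targets
      {
        send_local( tid, source, e );
      }
    }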

inline void send_off_grid_remote(size_t tid, SpikeEvent &e, const long lag = 0)

Add node ID of event sender to the spike_register.

Store event offset with node ID. An event sent through this method will remain in the queue until the network time has advanced by min_delay_ steps. After this period the buffers are collocated and sent to the partner machines.

Old documentation from network.h: Place an event in the global event queue. Add event to the queue to be delivered when it is due. At the delivery time, the target list of the sender is iterated and the event is delivered to all targets. The event is guaranteed to arrive at the receiver when all elements are updated and the system is in a synchronised (single threaded) state.

See also

send_to_targets()

inline void send_to_node(Event &e)

Send event e directly to its target node.

This should be used only where necessary, e.g. if a node wants to reply to a *RequestEvent immediately.
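A typical use is answering a DataLoggingRequest with recorded data in the same update cycle. A minimal sketch, assuming a hypothetical node class MyNode and recorded-data member data_ (the real logic lives in NEST's UniversalDataLogger):

    // Sketch: immediate reply to a request event (MyNode and data_ are
    // illustrative placeholders).
    void
    MyNode::handle( nest::DataLoggingRequest& request )
    {
      nest::DataLoggingReply reply( data_ );
      reply.set_sender( *this );
      reply.set_receiver( request.get_sender() );
      reply.set_port( request.get_port() );
      nest::kernel().event_delivery_manager.send_to_node( reply );
    }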

inline bool get_off_grid_communication() const

Return the current communication style.

A return value of true means off_grid, false means on_grid communication.

inline void set_off_grid_communication(bool off_grid_spiking)

Set communication style to off_grid (true) or on_grid (false).
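Off-grid communication is required by precise-spike-timing models, and the kernel enables it when such a model is in use. A minimal sketch of toggling and checking the flag:

    #include <cassert>
    #include "kernel_manager.h"

    // Sketch: switch to off-grid spike communication and verify the setting.
    void
    enable_off_grid_sketch()
    {
      nest::kernel().event_delivery_manager.set_off_grid_communication( true );
      assert( nest::kernel().event_delivery_manager.get_off_grid_communication() );
    }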

inline size_t write_toggle() const

Return 0 for even, 1 for odd time slices.

This is useful for buffers that need to be written alternately, switching each time slice. The value is given by get_slice_() % 2.

See also

read_toggle

inline size_t read_toggle() const

Return 1 - write_toggle().

This is useful for buffers that need to be read alternately, switching each time slice. The value is given by 1 - write_toggle().

See also

write_toggle
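Together, write_toggle() and read_toggle() implement double buffering: one half of a buffer is filled during the current time slice while the half written in the previous slice is consumed. A minimal sketch, where the two-halved container SliceBuffer is hypothetical rather than NEST API:

    #include <vector>
    #include "kernel_manager.h"

    // Hypothetical double-buffered store, toggled once per time slice.
    class SliceBuffer
    {
      std::vector< double > halves_[ 2 ]; // one half per toggle value

    public:
      void
      push( double v ) // write into the current slice's half
      {
        halves_[ nest::kernel().event_delivery_manager.write_toggle() ].push_back( v );
      }

      void
      consume() // read what was written during the previous slice
      {
        auto& half = halves_[ nest::kernel().event_delivery_manager.read_toggle() ];
        // ... process the values in half ...
        half.clear();
      }
    };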

inline long get_modulo(long d)

Return (T+d) mod max_delay.

inline long get_slice_modulo(long d)

Index to slice-based buffer.

Return ((T+d)/min_delay) % ceil(max_delay/min_delay).
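For example, with min_delay = 2 and max_delay = 6 steps, the slice-based buffer has ceil(6/2) = 3 entries, and get_slice_modulo(d) returns ((T+d)/2) % 3. get_modulo() is the basis of delay-indexed ring buffers; a minimal sketch, where buffer_ is a hypothetical container of at least max_delay entries:

    #include <vector>
    #include "kernel_manager.h"

    // Sketch: delay-indexed write, simplified from the ring-buffer pattern.
    void
    add_weighted_spike( std::vector< double >& buffer_, const long delay_steps, const double weight )
    {
      const long idx = nest::kernel().event_delivery_manager.get_modulo( delay_steps );
      buffer_[ idx ] += weight; // slot is read when the clock reaches T + delay_steps
    }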

void configure_spike_data_buffers()

Resize spike_register and comm_buffer to correct dimensions.

Also resizes off_grid_*_buffer_. This is done by simulate() when called for the first time. The spike buffers cannot be reconfigured later; hence, neither the number of local threads nor the min_delay can change after simulate() has been called. ConnectorModel::check_delay() and Network::set_status() ensure this.

void gather_spike_data()

Collocates spikes from the spike register into MPI buffers, communicates via MPI, and delivers events to targets.

void gather_target_data(const size_t tid)

Collocates presynaptic connection information, communicates via MPI and creates presynaptic connection infrastructure.

void deliver_events(const size_t tid)

Delivers events to targets.

void gather_secondary_target_data()

Collocates presynaptic connection information for secondary events (MPI buffer offsets), communicates via MPI, and creates the presynaptic connection infrastructure for secondary events.

void update_moduli()

Update modulo table based on current time settings.

This function is called after all nodes have been updated. We can compute the value of (T+d) mod max_delay without explicit reference to the network clock, because compute_moduli_ is called whenever the network clock advances. The moduli for all available delays are stored in a lookup table, and this table is rotated once per time slice.

This updates the table of fixed moduli, including the slice-based table.
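Conceptually, the table stores (T+d) mod max_delay for every delay d, so advancing the origin by one min_delay slice amounts to rotating the table. A sketch under the assumption that delays are given in steps and the table length equals max_delay (names are illustrative):

    #include <algorithm>
    #include <vector>

    // Illustrative rotation of the moduli table after one min_delay slice.
    void
    rotate_moduli_sketch( std::vector< long >& moduli, const long min_delay )
    {
      // Before: moduli[ d ] == ( T + d ) % max_delay.
      // Rotating by min_delay makes the entries valid for the new
      // slice origin T + min_delay.
      const long shift = min_delay % static_cast< long >( moduli.size() );
      std::rotate( moduli.begin(), moduli.begin() + shift, moduli.end() );
    }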

void init_moduli()

Initialize modulo table.

TODO: can probably be private

virtual void reset_counters()

Set local spike counter to zero.

virtual void reset_timers_for_preparation()

Set time measurements for internal profiling to zero (regarding preparation).

virtual void reset_timers_for_dynamics()

Set time measurements for internal profiling to zero (regarding simulation dynamics).