DistributedBlock
- class DistributedBlock(block, indices_required_from_me: List[Tensor], sizes_expected_from_others: List[int], src_ranges: List[Tuple[int, int]], unique_src_nodes: List[Tensor], input_nodes: Tensor, seeds: Tensor, edge_type_names: List[str])
A wrapper around a dgl.DGLBlock object. The DGLBlock represents all edges incoming to the local partition. In the forward and backward passes, the wrapper communicates with remote partitions to implement one-shot communication and aggregation. You should not construct DistributedBlock directly; instead, use
GraphShardManager.get_full_partition_graph()
- Parameters:
block – A DGLBlock object representing all edges incoming to the local partition
indices_required_from_me (List[Tensor]) – The local node indices required by every other partition to carry out one-hop aggregation
sizes_expected_from_others (List[int]) – The number of node indices to fetch from each remote partition in order to update the features of the nodes in the local partition
src_ranges (List[Tuple[int, int]]) – The global node ids of the start and end nodes in each partition. Nodes within each partition have consecutive global ids
unique_src_nodes (List[Tensor]) – The absolute node indices of the source nodes in each remote partition
input_nodes (Tensor) – The indices of the input nodes relative to the starting node index of the local partition. The input nodes are the nodes needed to produce the output node features, assuming one-hop aggregation
seeds (Tensor) – The node indices of the output nodes relative to the starting node index of the local partition
edge_type_names (List[str]) – A list of edge type names
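To make the indexing conventions concrete, the sketch below (a hypothetical helper, not part of the library) shows how a `src_ranges`-style list of per-partition global id ranges can be used to map a global node id to its owning partition and partition-local index. It assumes half-open ranges `[start, end)` of consecutive global ids per partition; the library's actual range convention may be inclusive.

```python
from typing import List, Tuple

def owner_and_local_index(global_id: int,
                          src_ranges: List[Tuple[int, int]]) -> Tuple[int, int]:
    """Map a global node id to (partition rank, partition-local index).

    Assumes src_ranges[p] = (start, end) is the half-open range
    [start, end) of consecutive global node ids owned by partition p.
    """
    for rank, (start, end) in enumerate(src_ranges):
        if start <= global_id < end:
            # Local index is the offset from the partition's start id
            return rank, global_id - start
    raise ValueError(f"global id {global_id} not in any partition range")

# Three partitions owning global ids [0, 4), [4, 9), and [9, 12)
ranges = [(0, 4), (4, 9), (9, 12)]
print(owner_and_local_index(6, ranges))   # → (1, 2): partition 1, local index 2
```

This is the same translation that parameters such as `input_nodes` and `seeds` rely on: their indices are expressed relative to the starting node id of the local partition rather than as global ids.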