
Before Change


            combination_slices = tf.unstack(K.reshape(combinations, (B, -1, 2 * self.F_)))
            output_features = []
            for slice in combination_slices:
                dense = Dense(1)(slice)  # N x 1 (basically "a(Wh_i, Wh_j)" in the paper)
                # TODO: masking
                e_i = K.reshape(dense, (1, -1))  # 1 x N (e_i in the paper)
                softmax = K.squeeze(K.softmax(e_i))  # N (alpha_i in the paper)
                softmax_broadcast = K.transpose(K.reshape(K.tile(softmax, [self.F_]), [self.F_, -1]))
                node_features = K.sum(softmax_broadcast * linear_transf, axis=0)
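
For reference, a minimal NumPy sketch of what this per-slice loop computes for one attention head, vectorized over all node pairs. The names gat_head, W and a are illustrative assumptions, not this repository's API; the paper's LeakyReLU before the softmax is omitted to match the loop above, and no masking is applied (per the TODO).

    import numpy as np

    def gat_head(X, W, a):
        """X: (N, F) node features, W: (F, F') weights, a: (2F',) attention kernel."""
        H = X @ W                                    # (N, F') linear transform, Wh_i
        N = H.shape[0]
        # All pairwise combinations [Wh_i || Wh_j] -> (N, N, 2F')
        combos = np.concatenate(
            [np.repeat(H[:, None, :], N, axis=1),    # Wh_i, broadcast over j
             np.repeat(H[None, :, :], N, axis=0)],   # Wh_j, broadcast over i
            axis=-1)
        e = combos @ a                               # (N, N): e_ij = a(Wh_i, Wh_j)
        e -= e.max(axis=1, keepdims=True)            # for numerical stability
        alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)  # row-wise softmax (alpha_i)
        return alpha @ H                             # (N, F') attention-weighted features

    rng = np.random.default_rng(0)
    out = gat_head(rng.normal(size=(5, 8)), rng.normal(size=(8, 4)), rng.normal(size=(8,)))
    print(out.shape)  # (5, 4)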

After Change


        X = inputs[0]  # input graph (B x F)
        G = inputs[1]  # full graph (N x F) (this is necessary in code, but not in theory. Check section 2.2 of the paper)
        B = K.shape(X)[0]  # Get batch size at runtime
        N = K.shape(G)[0]  # Get number of nodes in the graph at runtime

        outputs = []  # Will store the outputs of each attention head (B x F' or B x KF')
        for head in range(self.attention_heads):
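
The substantive change is that B and N are now read at graph-execution time rather than taken from the static shape, which is undefined for variable-sized inputs. A standalone sketch of the distinction, using tf.shape (the TensorFlow counterpart of K.shape); the helper first_dim is hypothetical:

    import tensorflow as tf

    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 8], dtype=tf.float32)])
    def first_dim(x):
        assert x.shape[0] is None  # static shape: batch dim unknown when tracing
        return tf.shape(x)[0]      # dynamic shape: a scalar tensor known at run time

    print(first_dim(tf.zeros((3, 8))).numpy())  # 3
    print(first_dim(tf.zeros((7, 8))).numpy())  # 7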
In pattern: SUPERPATTERN

Frequency: 3

Non-data size: 4

Instances


Project Name: danielegrattarola/keras-gat
Commit Name: 9d56361641a64ff73ac630812ecd4964eedbc7aa
Time: 2017-11-09
Author: daniele.grattarola@gmail.com
File Name: gat/graph_attention_layer.py
Class Name: GraphAttention
Method Name: call


Project Name: shenweichen/DeepCTR
Commit Name: 8182ea386e6529a1a2294d8e2d33fc040d0cbfb2
Time: 2019-07-21
Author: wcshen1994@163.com
File Name: deepctr/inputs.py
Class Name:
Method Name: get_linear_logit


Project Name: philipperemy/keras-tcn
Commit Name: 0cfe82c6beb9a28a5ff7da81b86fa0e93c388f14
Time: 2019-11-20
Author: premy@cogent.co.jp
File Name: tasks/save_reload_model.py
Class Name:
Method Name: