Deep graph models are usually treated as black boxes and lack explainability. Without reasoning about the prediction procedures of GNNs, we cannot tell whether the models work in the way we expect, which prevents their use in critical applications pertaining to fairness, privacy, and safety. To demystify these black boxes, we need to study the explainability of GNNs. Recently, several approaches have been proposed to explain GNN models, such as XGNN [3], GNNExplainer [4], PGExplainer [5], and SubgraphX.

To explain GNNs, we first need to decide what type of explanation we need. If we want a general understanding of and high-level insights into a GNN, we can study model-level explanations. Existing model-level methods, such as XGNN, explain which graph patterns lead to a certain behavior of the model.

If we instead need explanations for individual predictions, we should choose instance-level methods, such as GNNExplainer, PGExplainer, and SubgraphX. Instance-level methods may select different explanation targets, such as graph edges, nodes, walks, or subgraphs. For example, to study graph nodes, we can use ZORRO [6], RelEx [7], and PGM-Explainer [8].
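The method taxonomy above can be summarized as a small decision helper. This is only an illustrative sketch restating the text; the function name, argument names, and the exact method-to-target mapping are assumptions for the example, not an official API of any of these libraries.

```python
# Hypothetical helper that maps the taxonomy described in the text
# (model-level vs. instance-level, and the instance-level targets)
# to candidate GNN explanation methods.

# Instance-level methods grouped by their explanation target,
# as listed in the text (an illustrative, non-exhaustive mapping).
INSTANCE_LEVEL = {
    "node": ["ZORRO", "RelEx", "PGM-Explainer"],
    "edge": ["GNNExplainer", "PGExplainer"],
    "subgraph": ["SubgraphX"],
}

# Model-level methods give high-level insights into the GNN itself.
MODEL_LEVEL = ["XGNN"]


def suggest_explainers(scope, target=None):
    """Return candidate GNN explanation methods.

    scope:  'model' for high-level understanding of the GNN, or
            'instance' to explain individual predictions.
    target: for instance-level explanations, the explanation target
            ('node', 'edge', or 'subgraph').
    """
    if scope == "model":
        return MODEL_LEVEL
    if scope == "instance":
        return INSTANCE_LEVEL.get(target, [])
    raise ValueError(f"unknown scope: {scope!r}")
```

For instance, `suggest_explainers("instance", "node")` returns the node-focused methods from the text, while `suggest_explainers("model")` points to XGNN.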