Consider the problem of learning the underlying structure of a Gaussian graphical model when a variable (or a subset of variables) is corrupted by independent noise. A recent body of work has established that, even for tree-structured graphical models, only partial structure recovery is possible under such noise, and algorithms have been devised to identify the structure down to the (inevitable) equivalence class of the tree. Since tree graphs cannot model some real-world scenarios, we extend these results beyond trees and consider the problem of model selection under noise in non-tree-structured graphs. Although the exact structure is not identifiable, we show that the ambiguity is again limited to an equivalence class, as in the tree-structured case. This limited ambiguity still yields meaningful clustering information (even in the presence of noise), which is useful in applications such as computer and social networks, protein-protein interaction networks, and power networks. In addition, we devise an algorithm, based on a new ancestry-testing method, for recovering the equivalence class. We complement these results with a finite-sample guarantee for the algorithm in the high-dimensional regime.
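As a minimal illustrative sketch of the noisy setting (not the paper's ancestry-testing algorithm), the following Python snippet shows, for a hypothetical chain-structured Gaussian, how independent noise on a single variable perturbs the population precision matrix, so that naive support recovery from the noisy precision matrix no longer returns the true graph. The toy graph, edge weights, and noise level are assumptions made purely for demonstration.

```python
# Sketch: independent noise on one variable obscures graphical-model structure.
# The 4-node chain 1-2-3-4 and the noise variance below are illustrative choices.
import numpy as np

# Precision matrix of the chain (nonzero off-diagonals = edges).
Theta = np.array([
    [1.0, 0.4, 0.0, 0.0],
    [0.4, 1.0, 0.4, 0.0],
    [0.0, 0.4, 1.0, 0.4],
    [0.0, 0.0, 0.4, 1.0],
])
Sigma = np.linalg.inv(Theta)       # covariance of the clean variables X

# Corrupt variable 2 with independent additive noise: Y = X + e, e ~ N(0, D).
D = np.diag([0.0, 0.5, 0.0, 0.0])
Sigma_noisy = Sigma + D            # covariance of the observed variables Y
Theta_noisy = np.linalg.inv(Sigma_noisy)

# The non-edge between the neighbors of the noisy variable (nodes 1 and 3)
# acquires a nonzero precision entry, so thresholding the noisy precision
# matrix would report a spurious edge.
print(np.round(Theta, 3))
print(np.round(Theta_noisy, 3))
```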