
Biconnected component

An example graph with its biconnected components marked: each color corresponds to a biconnected component; multi-colored vertices are cut vertices and thus belong to multiple biconnected components.

In graph theory, a biconnected component (or 2-connected component) is a maximal biconnected subgraph. Any connected graph decomposes into a tree of biconnected components called the block tree of the graph. The blocks are attached to each other at shared vertices called cut vertices or articulation points. Specifically, a cut vertex is any vertex whose removal increases the number of connected components.

Algorithm

The classic sequential algorithm for computing biconnected components in a connected undirected graph is due to John Hopcroft and Robert Tarjan (1973).[1] It runs in linear time, and is based on depth-first search. This algorithm is also outlined as Problem 22-2 of Introduction to Algorithms (both 2nd and 3rd editions).

The idea is to run a depth-first search while maintaining the following information:

  1. the depth of each vertex in the depth-first-search tree (once it gets visited), and
  2. for each vertex v, the lowest depth of neighbors of all descendants of v (including v itself) in the depth-first-search tree, called the lowpoint.

The depth is straightforward to maintain during a depth-first search. The lowpoint of v can be computed after visiting all descendants of v (i.e., just before v gets popped off the depth-first-search stack) as the minimum of the depth of v, the depth of all neighbors of v (other than the parent of v in the depth-first-search tree) and the lowpoint of all children of v in the depth-first-search tree.
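
Written as a recurrence, this is

  lowpoint(v) = min( {depth(v)} ∪ {depth(w) : w a neighbor of v other than the parent of v} ∪ {lowpoint(c) : c a child of v in the depth-first-search tree} ).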

The key fact is that a nonroot vertex v is a cut vertex (or articulation point) separating two biconnected components if and only if there is a child y of v such that lowpoint(y) ≥ depth(v). This property can be tested once the depth-first search has returned from every child of v (i.e., just before v gets popped off the depth-first-search stack), and if true, v separates the graph into different biconnected components. This can be represented by computing one biconnected component out of each such y (the component containing y will contain the subtree of y, plus v), and then erasing the subtree of y from the tree.

The root vertex must be handled separately: it is a cut vertex if and only if it has at least two children. Thus, it suffices to simply build one component out of each child subtree of the root (including the root).
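
As a rough sketch of this recursive scheme (not the article's reference code): the following C++11 routine assumes a simple, connected, undirected graph given as a plain adjacency list and returns each biconnected component as a list of its edges; the names adj and biconnected_components, as well as the explicit edge stack, are illustrative choices. Edges are pushed onto a stack as they are first explored, and whenever a child y of v satisfies lowpoint(y) ≥ depth(v), all edges above (and including) the tree edge (v, y) are popped off as one component.

#include <algorithm>
#include <functional>
#include <stack>
#include <utility>
#include <vector>

std::vector<std::vector<std::pair<int,int>>>
biconnected_components(const std::vector<std::vector<int>>& adj){
	const int n = (int)adj.size();
	std::vector<int> depth(n, -1), low(n, 0), parent(n, -1);
	std::stack<std::pair<int,int>> edge_stack;     // edges of the component currently being built
	std::vector<std::vector<std::pair<int,int>>> components;

	// Recursion depth is O(n); the non-recursive variant below avoids this.
	std::function<void(int)> dfs = [&](int v){
		low[v] = depth[v];
		for(int w : adj[v]){
			if(depth[w] == -1){
				// Tree edge v -> w.
				parent[w] = v;
				depth[w] = depth[v] + 1;
				edge_stack.push({v, w});
				dfs(w);
				low[v] = std::min(low[v], low[w]);
				if(low[w] >= depth[v]){
					// v separates the subtree of w: pop one biconnected component.
					std::vector<std::pair<int,int>> component;
					std::pair<int,int> e;
					do{
						e = edge_stack.top();
						edge_stack.pop();
						component.push_back(e);
					}while(e != std::make_pair(v, w));
					components.push_back(component);
				}
			}else if(w != parent[v] && depth[w] < depth[v]){
				// Back edge to a proper ancestor of v.
				edge_stack.push({v, w});
				low[v] = std::min(low[v], depth[w]);
			}
		}
	};

	if(n > 0){
		depth[0] = 0;
		dfs(0);
	}
	return components;
}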

A non-recursive C++11 implementation that lists the articulation points of a connected graph is

#include <algorithm>
#include <iterator>
#include <limits>
#include <stack>
#include <type_traits>
#include <vector>

// Graph is a class with the following prototype
// class Graph{
//	range_of_arc_type out(int vertex)const; // Returns a range of all outgoing arcs.
//	int head(arc_type arc)const;            // Maps an arc onto its head.
//	int node_count()const;                  // Counts the number of nodes in the graph.
// }
template<class Graph>
std::vector<int> find_all_articulation_points(const Graph&g){
	// The empty graph contains no articulation points
	if(g.node_count() == 0)
		return {};

	typedef typename std::decay<decltype(std::begin(g.out(0)))>::type Iter;

	std::vector<bool>
		visited(g.node_count(), false),
		is_articulation_point(g.node_count(), false);
	std::vector<int>
		parent(g.node_count(), -1),
		depth(g.node_count(), std::numeric_limits<int>::max()),
		min_succ_depth(g.node_count(), std::numeric_limits<int>::max());
	std::vector<Iter>next_out_arc(g.node_count());
	std::vector<int>articulation_point_list; // collects the articulation points as they are found

	const int root = 0;

	std::stack<int>s;
	s.push(root);
	depth[root] = 0;

	while(!s.empty()){
		int x = s.top();
		s.pop();

		if(!visited[x]){
			// x is visited for the first time.
			visited[x] = true;
			next_out_arc[x] = std::begin(g.out(x));
			min_succ_depth[x] = depth[x];
		}else{
			// A child of x has been fully processed, continue with the next child.
			auto xy = *next_out_arc[x];
			auto y = g.head(xy);
			if(min_succ_depth[y] >= depth[x] && !is_articulation_point[x] && x != root){
				// As removing x would disconnect y from parent[x], we know that x is an articulation point.
				is_articulation_point[x] = true;
				articulation_point_list.push_back(x);
			}
			min_succ_depth[x] = std::min(min_succ_depth[x], min_succ_depth[y]);
			++next_out_arc[x];
		}

		auto end = std::end(g.out(x));
		while(next_out_arc[x] != end){
			auto xy = *next_out_arc[x];
			auto y = g.head(xy);
			if(visited[y]){
				if(parent[x] != y)
					min_succ_depth[x] = std::min(min_succ_depth[x], depth[y]);
				++next_out_arc[x];
			}else{
				s.push(x); // push x so that it is visited a second time on the way up
				s.push(y);
				parent[y] = x;
				depth[y] = depth[x]+1;
				break;
			}
		}
	}

	int root_child_count = 0;
	for(auto xy:g.out(root)){
		int y = g.head(xy);		
		if(parent[y] == root)
			++root_child_count;
	}
	
	if(root_child_count >= 2)
		articulation_point_list.push_back(root);
	return articulation_point_list;
}
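
To illustrate how the template above might be used, here is a minimal adjacency-list adapter satisfying the assumed Graph interface; the class name AdjacencyListGraph and its members are hypothetical and only for illustration (each arc is stored directly as its head vertex).

#include <iostream>
#include <vector>

class AdjacencyListGraph{
public:
	explicit AdjacencyListGraph(int n) : out_arcs(n) {}

	// Adds an undirected edge by inserting an arc in both directions.
	void add_edge(int u, int v){
		out_arcs[u].push_back(v);
		out_arcs[v].push_back(u);
	}

	const std::vector<int>& out(int vertex)const { return out_arcs[vertex]; } // range of outgoing arcs
	int head(int arc)const { return arc; }                                    // an arc is stored as its head
	int node_count()const { return (int)out_arcs.size(); }

private:
	std::vector<std::vector<int>> out_arcs;
};

int main(){
	// Two triangles sharing vertex 2, so vertex 2 is the only articulation point.
	AdjacencyListGraph g(5);
	g.add_edge(0, 1); g.add_edge(1, 2); g.add_edge(2, 0);
	g.add_edge(2, 3); g.add_edge(3, 4); g.add_edge(4, 2);

	for(int v : find_all_articulation_points(g))
		std::cout << v << '\n'; // prints 2
}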

Other algorithms

A simple alternative to the above algorithm uses chain decompositions, which are special ear decompositions that depend on a DFS tree.[2] A chain decomposition can be computed in linear time by the following traversal rule: process the vertices in DFS order and, for every back edge whose ancestor endpoint is the current vertex, walk along the back edge and then upward along tree edges toward the root, stopping at the first vertex that has already been traversed; each such walk forms one chain, and the chains C1, C2, ... are numbered in the order in which they are constructed. Let C be a chain decomposition of G. Then G is 2-vertex-connected if and only if G has minimum degree 2 and C1 is the only cycle in C. This immediately gives a linear-time 2-connectivity test and can be extended to list all cut vertices of G in linear time using the following statement: a vertex v in a connected graph G (with minimum degree 2) is a cut vertex if and only if v is incident to a bridge or v is the first vertex of a cycle in C − C1. The list of cut vertices can be used to create the block-cut tree of G in linear time.
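
As a rough sketch of the resulting 2-connectivity test, under the assumption of a simple graph stored as a plain adjacency list (the names adj and is_biconnected are illustrative, not from the cited paper):

#include <functional>
#include <vector>

bool is_biconnected(const std::vector<std::vector<int>>& adj){
	const int n = (int)adj.size();
	if(n < 3)
		return false; // a 2-vertex-connected graph has at least three vertices

	// The minimum degree must be at least 2.
	for(const auto& neighbors : adj)
		if((int)neighbors.size() < 2)
			return false;

	// Depth-first search from vertex 0, recording preorder index and tree parent.
	std::vector<int> dfi(n, -1), parent(n, -1), order;
	order.reserve(n);
	std::function<void(int)> dfs = [&](int v){
		dfi[v] = (int)order.size();
		order.push_back(v);
		for(int w : adj[v])
			if(dfi[w] == -1){
				parent[w] = v;
				dfs(w);
			}
	};
	dfs(0);
	if((int)order.size() != n)
		return false; // a disconnected graph is not 2-connected

	// Build the chains: every back edge (v, w) with v the ancestor starts a chain
	// that runs through w and then upward along tree edges until it reaches a
	// vertex that already belongs to an earlier chain.
	std::vector<char> in_chain(n, 0);
	int cycle_count = 0;
	for(int v : order)
		for(int w : adj[v])
			if(dfi[w] > dfi[v] && parent[w] != v){ // back edge with v as the ancestor endpoint
				in_chain[v] = 1;
				int u = w;
				while(!in_chain[u]){
					in_chain[u] = 1;
					u = parent[u];
				}
				if(u == v)
					++cycle_count; // the chain closed at its start vertex, so it is a cycle
			}

	// The first chain is always a cycle; G is 2-vertex-connected iff it is the only one.
	return cycle_count == 1;
}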

In the online version of the problem, vertices and edges are added (but not removed) dynamically, and a data structure must maintain the biconnected components. Jeffery Westbrook and Robert Tarjan (1992)[3] developed an efficient data structure for this problem based on disjoint-set data structures. Specifically, it processes n vertex additions and m edge additions in O(m α(m, n)) total time, where α is the inverse Ackermann function. This time bound is proved to be optimal.

Uzi Vishkin and Robert Tarjan (1985) [4] designed a parallel algorithm on CRCW PRAM that runs in O(log n) time with n + m processors. Guojing Cong and David A. Bader (2005) [5] developed an algorithm that achieves a speedup of 5 with 12 processors on SMPs. Speedups exceeding 30 based on the original Tarjan-Vishkin algorithm were reported by James A. Edwards and Uzi Vishkin (2012).[6]

Notes

  1. ^ Hopcroft, J.; Tarjan, R. (1973), "Algorithm 447: efficient algorithms for graph manipulation", Communications of the ACM, 16 (6): 372–378, doi:10.1145/362248.362272.
  2. ^ Schmidt, Jens M. (2013), "A Simple Test on 2-Vertex- and 2-Edge-Connectivity", Information Processing Letters, 113 (7): 241–244, doi:10.1016/j.ipl.2013.01.016.
  3. ^ Westbrook, J.; Tarjan, R. E. (1992), "Maintaining bridge-connected and biconnected components on-line", Algorithmica, 7: 433–464, doi:10.1007/BF01758773.
  4. ^ Tarjan, R.; Vishkin, U. (1985), "An efficient parallel biconnectivity algorithm", SIAM Journal on Computing, 14 (4): 862–874, doi:10.1137/0214061.
  5. ^ Cong, G.; Bader, D. A. (2005), "An experimental study of parallel biconnected components algorithms on symmetric multiprocessors (SMPs)", 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2005), doi:10.1109/IPDPS.2005.100.
  6. ^ Edwards, J. A.; Vishkin, U. (2012), "Better speedups using simpler parallel programming for graph connectivity and biconnectivity", doi:10.1145/2141702.2141714.
