Towards the Solution of NP-Complete Problems

The word "algorithm" is the magical word in the field of computer science, because no branch of computer science, such as artificial intelligence, computer networks, or human-computer interaction, can even be imagined without it. There are some problems for which no researcher has been able to design an algorithm with polynomial worst-case time complexity, and no researcher has proved that no polynomial-time algorithm can solve them in the worst case. NP-complete problems arise in domains such as automata and language theory, sequencing and scheduling, mathematical programming, algebra and number theory, games and puzzles, optimization, Boolean logic, graphs, arithmetic, network design, sets and partitions, storage and retrieval, biology, chemistry, and physics. In this paper such problems (nondeterministic polynomial-time complete problems) are studied, along with an approximation algorithm for the vertex cover problem. Researchers are applying their best efforts to design new approximation algorithms for nondeterministic polynomial-time problems with lower time and space complexity than the existing approximation algorithms.

1.    Introduction
An algorithm may be defined as any computational procedure that takes a single value or a set of values as input and produces a single value or a set of values as output [2]. It is a well-known fact that not all problems can be solved in polynomial time in the worst case. If n is the size of the input, algorithms whose worst-case time complexity is O(n^k) for some constant k are known as polynomial-time algorithms. There are problems that cannot be solved by polynomial-time algorithms in the worst case, and there are even problems, such as Turing's famous "Halting Problem", that cannot be solved by any computer no matter how much time is provided. Problems that can be solved in time O(n^k) for some constant k are called tractable, while problems that require super-polynomial time are known as intractable or hard problems.

No polynomial-time algorithm has yet been discovered for any NP-complete problem, yet no researcher has been able to prove that no polynomial-time algorithm can exist for any one of them. Several NP-complete problems look very similar to problems that admit polynomial-time algorithms. In each of the following pairs of problems, the first problem is solvable in polynomial time while the second is NP-complete.
The single-source shortest-path problem can be solved in polynomial time: even if some edges of a directed graph G = (V, E) have negative weight, shortest paths from a single source can be found in O(VE) time, where V represents the set of vertices and E the set of edges. Finding the longest simple path between two vertices, however, is not known to be solvable in polynomial time in the worst case; this problem is NP-complete, and it remains NP-complete even when every edge has unit weight.
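As a concrete illustration of the polynomial-time side of this contrast, the following is a minimal sketch of the Bellman-Ford algorithm, which attains the O(VE) bound quoted above even when some edge weights are negative. The edge-list representation and the function name are illustrative choices made for this sketch, not something prescribed by the text.

def bellman_ford(num_vertices, edges, source):
    """Single-source shortest paths; negative edge weights are allowed.

    edges is a list of (u, v, w) triples.  Runs in O(VE) time and returns
    a list of distances, or None if a negative-weight cycle is reachable
    from the source.
    """
    INF = float('inf')
    dist = [INF] * num_vertices
    dist[source] = 0
    # Relax every edge V - 1 times.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One further pass detects negative-weight cycles.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist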
An Euler tour of a connected, directed graph is a cycle that traverses each edge of the graph exactly once, although it may visit a vertex more than once. If E represents the set of edges and V the set of vertices, we can determine whether a graph has an Euler tour in only O(E) time. A Hamiltonian cycle of a directed graph is a cycle that visits each vertex of the graph exactly once. No polynomial-time algorithm is known for finding a Hamiltonian cycle of a graph G = (V, E), and the problem of determining whether a directed graph G = (V, E) has a Hamiltonian cycle is NP-complete.
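The O(E) claim can be made concrete with the classical characterization: a directed multigraph has an Euler tour exactly when every vertex has equal in-degree and out-degree and all vertices that touch an edge lie in one connected component of the underlying undirected graph. The sketch below checks that condition; the edge-pair input format is an assumption of this example.

from collections import defaultdict, deque

def has_euler_tour(edges):
    """Check in O(V + E) time whether a directed multigraph has an Euler tour.

    edges is a list of (u, v) pairs.
    """
    indeg = defaultdict(int)
    outdeg = defaultdict(int)
    undirected = defaultdict(list)
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
        undirected[u].append(v)
        undirected[v].append(u)

    vertices = set(indeg) | set(outdeg)
    if not vertices:
        return True                 # no edges: trivially Eulerian
    if any(indeg[v] != outdeg[v] for v in vertices):
        return False

    # Breadth-first search over the underlying undirected graph.
    start = next(iter(vertices))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for w in undirected[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == vertices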
The 2-CNF satisfiability problem can be solved in polynomial time, but no polynomial-time algorithm is known for the 3-CNF satisfiability problem; 3-CNF satisfiability is NP-complete.
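To make the contrast concrete, here is a self-contained sketch of the standard polynomial-time decision procedure for 2-CNF satisfiability, based on strongly connected components of the implication graph. The clause encoding (a positive integer i for the variable x_i and -i for its negation) and the function name are assumptions of this example, and the code favors clarity over optimization.

from collections import defaultdict

def solve_2sat(num_vars, clauses):
    """Decide a 2-CNF formula in polynomial time via implication-graph SCCs.

    clauses is a list of pairs of literals; literal +i stands for x_i and
    -i for its negation (variables are numbered 1..num_vars).  Returns a
    satisfying assignment as a dict, or None if the formula is unsatisfiable.
    """
    def node(lit):
        # Literal x_i maps to node 2*(i-1), its negation to 2*(i-1)+1.
        v = abs(lit) - 1
        return 2 * v if lit > 0 else 2 * v + 1

    n = 2 * num_vars
    graph = defaultdict(list)       # implication graph
    rgraph = defaultdict(list)      # reversed implication graph
    for a, b in clauses:
        # (a or b) is equivalent to (not a -> b) and (not b -> a).
        graph[node(-a)].append(node(b))
        graph[node(-b)].append(node(a))
        rgraph[node(b)].append(node(-a))
        rgraph[node(a)].append(node(-b))

    # Kosaraju's algorithm, pass 1: record vertices by increasing finish time.
    visited = [False] * n
    order = []
    for start in range(n):
        if visited[start]:
            continue
        visited[start] = True
        stack = [(start, iter(graph[start]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if not visited[w]:
                    visited[w] = True
                    stack.append((w, iter(graph[w])))
                    advanced = True
                    break
            if not advanced:
                order.append(v)
                stack.pop()

    # Pass 2: label components on the reversed graph; the labels come out
    # in topological order of the condensation of the original graph.
    comp = [-1] * n
    label = 0
    for u in reversed(order):
        if comp[u] != -1:
            continue
        comp[u] = label
        stack = [u]
        while stack:
            v = stack.pop()
            for w in rgraph[v]:
                if comp[w] == -1:
                    comp[w] = label
                    stack.append(w)
        label += 1

    assignment = {}
    for v in range(num_vars):
        if comp[2 * v] == comp[2 * v + 1]:
            return None             # x and not-x are equivalent: unsatisfiable
        # Set x true exactly when not-x precedes x in topological order.
        assignment[v + 1] = comp[2 * v] > comp[2 * v + 1]
    return assignment

For example, solve_2sat(2, [(1, 2), (-1, 2), (1, -2)]) returns an assignment in which x_2 is true, since every satisfying assignment of that formula must set x_2 to true.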

2.    Related Work:  If n is the size of the input, algorithms whose worst-case time complexity is O(n^k) are known as polynomial-time algorithms. Not all problems can be solved by polynomial-time algorithms in the worst case.
Problems that are "verifiable" in polynomial time are said to belong to the class NP. If a "certificate" of a solution is given and we can verify, in time polynomial in the size of the input to the problem, that the certificate is correct, then the problem is in the class NP. Any problem in P is also in NP. It is a well-known fact that if any one of the existing NP-complete problems can be solved in polynomial time, then every NP-complete problem can be solved by a polynomial-time algorithm. Most computer scientists believe that the NP-complete problems are intractable, because there is a large number of NP-complete problems and, despite the best efforts of many researchers, no one has succeeded in giving a polynomial-time algorithm that produces an optimal solution to any of them.
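The notion of a polynomial-time verifiable certificate can be illustrated with the Hamiltonian-cycle problem mentioned in the introduction: although no polynomial-time algorithm is known for finding a Hamiltonian cycle, a proposed cycle is easy to check. The sketch below assumes a directed graph whose edges are given as a set of ordered pairs and a certificate given as a list of vertex indices.

def verify_hamiltonian_cycle(num_vertices, edge_set, certificate):
    """Polynomial-time verifier for the Hamiltonian-cycle problem.

    certificate is a proposed ordering of the vertices 0..n-1.  We check
    that it is a permutation of all vertices and that consecutive
    vertices (including the wrap-around) are joined by edges.
    """
    n = num_vertices
    if sorted(certificate) != list(range(n)):
        return False                # not a permutation of the vertices
    return all(
        (certificate[i], certificate[(i + 1) % n]) in edge_set
        for i in range(n)
    )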
Since no existing algorithm can produce an optimal solution to an NP-complete problem in polynomial time in the worst case, researchers have given approximation algorithms that solve such problems in polynomial time in the worst case. An approximation algorithm gives an approximate solution to a particular problem that is close to the optimal solution. The techniques used to show that a particular problem is NP-complete are different from the techniques used to design and analyze algorithms, because in showing a problem to be NP-complete we are making a statement about how hard the problem is.
There are some problems that require time Θ(n^1000). It is a well-known experience that once a polynomial-time algorithm for a problem is discovered, an algorithm with lower time complexity is often discovered soon afterwards. Moreover, a problem that is solvable in polynomial time in one model of computation can also be solved in polynomial time in another model; for example, the problems solvable in polynomial time by the serial random-access machine are also solvable in polynomial time on abstract Turing machines. The class of problems solvable in polynomial time also has closure properties, because polynomials are closed under addition, subtraction, multiplication, and composition. In particular, if a polynomial-time algorithm makes a fixed number of calls to subroutines whose running times are polynomial, the running time of the composite algorithm is also polynomial. An alphabet Σ is a finite set of symbols. A language over Σ is any set of strings made up of symbols from Σ. For example, if Σ = {0, 1}, then L = {10, 11, 101, 111, 1011, ...} is the language of binary representations of prime numbers. The empty string is denoted by ε, and the empty language is denoted by Ø. Every language L over Σ is a subset of Σ*.
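As a small illustration of the closure under composition mentioned above (a standard one-line argument, with a and b arbitrary positive constants):

$$
p(n) = n^{a}, \qquad q(n) = n^{b} \;\Longrightarrow\; p(q(n)) = \left(n^{b}\right)^{a} = n^{ab},
$$

which is again a polynomial; sums and products of polynomials are likewise polynomials.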
A variety of operations can be performed on languages; set-theoretic operations such as union and intersection apply directly to them. In general, we say that an algorithm A accepts a string y if, when y is given as input, the algorithm's output A(y) is 1, and that A rejects y if and only if A(y) = 0. A language L is accepted in polynomial time by an algorithm A if L is accepted by A and, in addition, there is a constant k such that for any string x of length n in L, algorithm A accepts x in time O(n^k).

Some problems can be solved exactly in polynomial time in the worst case; for example, the all-pairs shortest-path problem is solved by the Floyd-Warshall algorithm in O(n^3) time. But there are still problems that cannot be solved by any known exact algorithm in polynomial time in the worst case, and for these researchers have given a large number of approximation algorithms. An approximation algorithm gives a solution that is close to the optimal solution of the problem and has polynomial time complexity in the worst case. Examples of NP-complete problems are the traveling salesman problem, the subset sum problem, and the vertex cover problem. Researchers are applying their best efforts to design new approximation algorithms with lower time complexity than the existing approximation algorithms. Many NP-complete problems are of practical use, and we cannot abandon them merely because no exact algorithm solves them in polynomial time in the worst case. If the actual inputs are small, an algorithm with exponential running time may be satisfactory; we may be able to isolate special cases that are solvable in polynomial time; and it may still be possible to find, in polynomial time, solutions that approximate the exact solution, since a near-optimal solution is often good enough. An algorithm that returns a solution close to the optimum is called an approximation algorithm.

In a maximization problem we want to maximize the value of the output, while in a minimization problem we want to minimize it. Let C be the cost of the solution produced by an approximation algorithm and C* the cost of an optimal solution. An algorithm has an approximation ratio of ρ(n) if, for any input of size n, the cost C of the solution it produces is within a factor of ρ(n) of the cost C* of an optimal solution. This definition applies to both maximization and minimization problems. For a maximization problem, 0 < C ≤ C*, and the ratio C*/C gives the factor by which the cost of an optimal solution is larger than the cost of the approximate solution. For a minimization problem, 0 < C* ≤ C, and the ratio C/C* gives the factor by which the cost of the approximate solution is larger than the cost of an optimal solution.
Since all costs are assumed to be positive, these ratios are always well defined. The approximation ratio of an approximation algorithm is never less than 1. A ratio of exactly 1 means the algorithm produces an optimal solution, and the closer the ratio is to 1, the closer the solution given by the approximation algorithm is to the optimal solution.
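Following the usual textbook convention, the two cases can be combined into a single statement: an algorithm has approximation ratio ρ(n) if, for every input of size n,

$$
\max\!\left(\frac{C}{C^{*}},\; \frac{C^{*}}{C}\right) \le \rho(n),
$$

so that ρ(n) ≥ 1 always holds and ρ(n) = 1 exactly when the algorithm returns an optimal solution.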
For some problems, researchers have already designed polynomial-time approximation algorithms with small constant approximation ratios, while for other problems the best known polynomial-time approximation algorithms have approximation ratios that grow as a function of the input size n. In the set-cover problem, for instance, the approximation ratio grows with the size of the input. There is, in general, a trade-off between computation time and the quality of the approximation.
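The set-cover example can be made concrete with the classical greedy rule, whose approximation ratio grows roughly as the harmonic number H(n) = O(log n) of the instance size. The data representation below (a set for the universe and a list of sets for the subsets) is an assumption of this sketch.

def greedy_set_cover(universe, subsets):
    """Greedy approximation for the set-cover problem.

    universe is a set of elements and subsets is a list of sets whose
    union is assumed to cover the universe.  Returns the indices of the
    chosen subsets.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the subset covering the most still-uncovered elements.
        best = max(range(len(subsets)),
                   key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("the given subsets do not cover the universe")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Example: the greedy rule first picks {1, 2, 3}, then {4, 5}.
print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))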
An approximation scheme is an approximation algorithm that takes as input not only an instance of the problem but also a value ε > 0 such that, for any fixed ε, the scheme is a (1 + ε)-approximation algorithm. The vertex-cover problem is an NP-complete minimization problem; the traveling-salesman problem is an NP-complete minimization problem in which the cost function may or may not satisfy the triangle inequality. The vertex-cover problem has been defined and proven to be NP-complete. Its goal is to find a vertex cover of minimum size in a given undirected graph, that is, to select the minimum number of vertices of an undirected graph G = (V, E) that together cover all the edges of the graph. The approximation algorithm below takes an undirected graph G = (V, E) as input and returns a vertex cover.
An approximation algorithm for the vertex cover problem
Let G = (V, E) be the given graph, let X be a set of vertices that is initially empty, and let E[G] denote the set of all edges of G.
X ← Ø
E′ ← E[G]
while E′ ≠ Ø
      let (a, b) be an arbitrary edge of E′
      X ← X ∪ {a, b}
      remove from E′ every edge incident on either a or b
return X
The algorithm returns the set X, which is a vertex cover of G; a minimal Python sketch of this greedy procedure is given below.
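The sketch below restates the pseudocode in Python. The edge-list input format is an illustrative assumption, and the set rebuilding inside the loop is written for clarity rather than to match the O(V + E) bound discussed next, which requires adjacency lists.

def approx_vertex_cover(edges):
    """Greedy 2-approximation for minimum vertex cover.

    edges is a list of (u, v) pairs of an undirected graph.  Repeatedly
    pick an uncovered edge, add both endpoints to the cover, and discard
    every edge touching either endpoint.
    """
    cover = set()
    remaining = set(map(tuple, edges))
    while remaining:
        a, b = next(iter(remaining))        # an arbitrary uncovered edge
        cover.update((a, b))
        remaining = {(u, v) for (u, v) in remaining
                     if u not in (a, b) and v not in (a, b)}
    return cover

# Example: on the path 0-1-2-3 the optimum cover {1, 2} has size 2,
# and the returned cover has size at most twice the optimum.
print(approx_vertex_cover([(0, 1), (1, 2), (2, 3)]))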
If adjacency lists are used to represent E′, the running time of this algorithm is O(V + E).

Researchers are applying their best efforts to design new approximation algorithms for NP-complete problems such as vehicle routing with time windows, the deadline traveling salesman problem, orienteering problems, the subset sum problem, the vertex cover problem, the set cover problem, and the prize-collecting traveling salesman problem. In the vehicle routing with time windows problem, a graph G = (V, E) with n vertices is given, each node v has a release time R(v) and a deadline D(v), a person starts the visit from a starting node r, and the goal is to visit the maximum number of vertices within their time windows [R(v), D(v)]. In the deadline traveling salesman problem, the release time of every vertex is 0 and the goal is to visit the maximum number of vertices within their deadlines. In [34] an O(log n)-approximation algorithm for Deadline-TSP is given, and that algorithm is then extended to an O(log² n)-approximation algorithm for the time-window problem.

3.    Conclusion: In this paper, a study of NP-complete problems is presented, along with an approximation algorithm for an NP-complete problem, the vertex cover problem. It is concluded that researchers are applying their best efforts to design approximation algorithms for nondeterministic polynomial-time problems with lower time and space complexity than the previously existing algorithms, and they are also achieving success in this field.

4. References:
1.    E. Horowitz and S. Sahni, (1982) "Fundamentals of Computer Algorithms", Computer Science Press.
2.    T. Cormen, C. Leiserson and R. Rivest, (1991) "Introduction to Algorithms", MIT Press.
3.    A.V. Aho, J.E. Hopcroft and J.D. Ullman, (1974) "The Design and Analysis of Computer Algorithms", Addison-Wesley.
4.    A.V. Aho, J.E. Hopcroft and J.D. Ullman, (1984) "Data Structures and Algorithms", Addison-Wesley.
5.    S. Baase, (1988) "Computer Algorithms: Introduction to Design and Analysis", Addison-Wesley.
6.    L. Banachowski, A. Kreczmar and W. Rytter, (1991) "Analysis of Algorithms and Data Structures".
7.    R. Baeza-Yates, "Teaching Algorithms",(1995) IV Iberoamerican Congress on Computer Science Education, Canela, Brazil.
8.    J. Nievergelt and K. Hinrichs, (1993) "Algorithms and Data Structures with Applications to Graphics and Geometry", Prentice-Hall.
9.    G. Brassard and P. Bratley, (1988) "Algorithmics: Theory and Practice", Prentice-Hall.
10.    E.M. Reingold, J. Nievergelt and N. Deo, (1977) "Combinatorial Algorithms: Theory and Practice",Prentice-Hall.
11.    H. Wilf, (1986) "Algorithms and Complexity", Prentice-Hall.
12.    M. Crochemore and W. Rytter, (1994) "Text Algorithms", Oxford Press.
13.    J.S. Vitter and P. Flajolet, (1990) "Average-Case Analysis of Algorithms and Data Structures", Handbook of Theoretical Computer Science (volume A), Elsevier and MIT Press, pp 431-524.
14.    F.P. Preparata and M.I. Shamos, (1985) "Computational Geometry: An Introduction", Springer-Verlag.
15.    W. Frakes and R. Baeza-Yates, editors, (1992) "Information Retrieval: Data Structures and Algorithms",Prentice-Hall.
16.    A. Gibbons and W. Rytter, (1988) "Efficient Parallel Algorithms", Cambridge University Press.
17.    U. Manber, (1989) "Introduction to Algorithms: A Creative approach", Addison-Wesley.
18.    G. Gonnet and R. Baeza-Yates, (1991) "Handbook of Algorithms and Data Structures", Addison- Wesley.
19.    Blaine A. Price, Ronald M. Baecker and Ian S. Small, (1993) "A Principled Taxonomy of Software Visualization", Journal of Visual Languages and Computing 4, pp 211-266.