The first major question we need to answer is the following: How should we turn the fuzzy notion of an “efficient” algorithm into something more concrete?

A first attempt at a working definition of efficiency is the following.

Proposed Definition of Efficiency (1):

  • An algorithm is efficient if, when implemented, it runs quickly on real input instances.

Proposed Definition of Efficiency (2):

  • An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search.

Proposed Definition of Efficiency (3):

  • An algorithm is efficient if it has a polynomial running time.
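To make the contrast behind Definitions (2) and (3) concrete, here is a small sketch (an illustrative problem, not one from the text): selecting a maximum-size set of non-overlapping intervals. Brute-force search examines all $2^n$ subsets, while a simple greedy rule runs in polynomial time ($O(n \log n)$ after sorting). The function names and the sample instance are invented for this example.

```python
from itertools import combinations

def brute_force_schedule(intervals):
    """Brute-force search: try every subset of intervals (2^n of them)
    and keep the largest one whose intervals are pairwise non-overlapping."""
    def compatible(subset):
        s = sorted(subset)  # sorted by start; adjacent checks then suffice
        return all(a_end <= b_start for (_, a_end), (b_start, _) in zip(s, s[1:]))
    best = []
    for r in range(len(intervals) + 1):
        for subset in combinations(intervals, r):
            if compatible(subset) and len(subset) > len(best):
                best = list(subset)
    return sorted(best)

def greedy_schedule(intervals):
    """Polynomial-time alternative: repeatedly take the compatible
    interval with the earliest finish time."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen

jobs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
assert len(brute_force_schedule(jobs)) == len(greedy_schedule(jobs))  # both find 3
```

On this instance both methods find an optimal schedule of size 3, but the brute-force search already examines $2^8 = 256$ subsets; at $n = 50$ it would be hopeless, while the greedy rule remains fast.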

Note: this definition really works in practice. Problems for which polynomial-time algorithms exist almost invariably turn out to have algorithms with running times proportional to very moderately growing polynomials like $n$, $n\log{n}$, $n^2$, or $n^3$. Conversely, problems for which no polynomial-time algorithm is known tend to be very difficult in practice.
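The gap between these moderately growing polynomials and the exponential growth typical of brute-force search can be seen numerically. A quick sketch (the sample values of $n$ are chosen only for illustration):

```python
import math

# Compare step counts of moderately growing polynomials against the
# 2^n growth typical of brute-force search over all subsets.
for n in (10, 30, 50):
    print(f"n={n:3d}  n log n={n * math.log2(n):8.0f}  "
          f"n^2={n**2:6d}  n^3={n**3:8d}  2^n={2**n:18d}")
```

Already at $n = 50$, an $n^3$ algorithm takes 125,000 steps while brute-force search over subsets takes more than $10^{15}$, which is why the polynomial/exponential divide tracks practical tractability so well.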

All this serves to reinforce the point that our emphasis on worst-case, polynomial-time bounds is only an abstraction of practical situations.

In particular, the first of our definitions, which was tied to the specific implementation of an algorithm, turned efficiency into a moving target: as processor speeds increase, more and more algorithms fall under this notion of efficiency.

Our definition in terms of polynomial time is much more of an absolute notion; it is closely connected with the idea that each problem has an intrinsic level of computational tractability: some problems admit efficient solutions, and others do not.