Now that I'm rereading my book on NP-Completeness, I see that I was wrong about the definition of NP. In a previous journal I stated that NP means non-polynomial; actually it means nondeterministic polynomial. NP problems are ones that a nondeterministic computer can solve in polynomial time (I'm extrapolating here a bit). In other words, a computer that can search through all the possible solutions at once in polynomial time.
It's tempting to conclude that a quantum computer (should we ever manage to build one at any practical scale) would be able to solve NP problems in polynomial time, though as far as I can tell that hasn't actually been shown; quantum computers aren't known to solve NP-complete problems efficiently. Even so, it's pretty amazing that a long-held belief (though admittedly unproven) that NP problems can't be done in polynomial time could even be challenged by something so far out of the realm of current thinking. I'm talking about using a principle that people can't touch or feel, and have no intuitive sense of.
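A more concrete way to see the "check all the possible solutions" idea, without a nondeterministic machine, is the verifier view of NP: a proposed solution (a certificate) can be checked in polynomial time, even though finding one deterministically may take exponential time. A minimal sketch in Python, using subset-sum as the example problem (my choice of problem, not one from the book):

```python
from itertools import combinations

def verify_subset_sum(nums, target, certificate):
    """Polynomial-time check of a proposed solution (the NP 'verifier' view).
    The certificate is a collection of distinct indices into nums."""
    indices = list(certificate)
    return (len(set(indices)) == len(indices)
            and all(0 <= i < len(nums) for i in indices)
            and sum(nums[i] for i in indices) == target)

def solve_subset_sum(nums, target):
    """Deterministic brute force: try every possible certificate, which takes
    exponential time. A nondeterministic machine would 'guess' the right
    certificate and only pay for the polynomial-time check."""
    for r in range(len(nums) + 1):
        for combo in combinations(range(len(nums)), r):
            if verify_subset_sum(nums, target, combo):
                return combo
    return None
```

The asymmetry between the cheap `verify_subset_sum` and the expensive `solve_subset_sum` is exactly what the P vs. NP question is about.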
On further reading of the book ("Computers and Intractability: A Guide to the Theory of NP-Completeness") I realize that I've forgotten so much since graduating from college. For a lot of the NP-Complete problems being proven, I remember the name, but usually not what the problem is, and certainly not how to prove its NP-Completeness. And the proofs don't even involve any higher math, not even differentials or matrices or anything like that.
I've never been all that good at higher-level mathematics. I mean, most of it is not that hard when you look at it. I could read the sections, do the problems, and I did fine. But I could never really retain what I learned, partly, I think, because I never got an intuitive grasp of the concepts. Proofs especially gave me trouble. The problem is laid out with x, y, z and exponents n, m, k, l and other variables a, b, c and indices i and j, and then comes the proof. It's just really hard for me to picture it in my mind.
Take the problems in the book I'm reading. For example, a Hamiltonian Circuit: given a graph G = (V, E) with n = |V|, a Hamiltonian Circuit is an ordering <v[1], v[2], ..., v[n]> of the vertices of G such that {v[n], v[1]} is an element of E and {v[i], v[i+1]} is an element of E for all i, 1 <= i < n. (I'm using [x] instead of subscripts.) Now, I know what a Hamiltonian Circuit is: it's a path through a graph that hits every vertex exactly once and ends up back at the starting vertex.
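The formal definition translates almost line-for-line into a polynomial-time checker. A sketch in Python (the graph at the bottom is my own toy example, not one from the book):

```python
def is_hamiltonian_circuit(order, vertices, edges):
    """Check the definition: order = <v[1], ..., v[n]> must use every vertex
    exactly once, {v[i], v[i+1]} must be in E for 1 <= i < n, and the
    closing edge {v[n], v[1]} must be in E as well."""
    n = len(order)
    if n != len(vertices) or set(order) != set(vertices):
        return False
    consecutive = [{order[i], order[i + 1]} for i in range(n - 1)]
    closing = {order[-1], order[0]}
    return all(pair in edges for pair in consecutive + [closing])

# Toy example: a square a-b-c-d plus one diagonal a-c
V = ["a", "b", "c", "d"]
E = [{"a", "b"}, {"b", "c"}, {"c", "d"}, {"d", "a"}, {"a", "c"}]
```

Checking a given ordering like this is fast; what makes the problem hard is finding an ordering that passes the check among n! candidates.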
I don't know about you, but it's a lot easier for me to picture the second definition than the first. And that's the whole problem with higher-level math: everything is described mathematically. Sure, it's accurate, but a simple sentence and a picture would help a lot. I can muddle through and take an hour to fully understand what's going on, but that's not going to stick. And a lot of the other math is the same way. Derivatives and integrals: it's easier to think of them as acceleration and distance when the base formula is speed.
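That intuition can even be checked numerically. A small sketch (my own example, assuming a speed function v(t) = 2t, so the acceleration should come out to 2 and the distance over [0, t] to t squared):

```python
def speed(t):
    # the base formula: speed at time t
    return 2.0 * t

def acceleration(t, h=1e-6):
    # derivative of speed, approximated by a central difference
    return (speed(t + h) - speed(t - h)) / (2 * h)

def distance(t, steps=100_000):
    # integral of speed from 0 to t, approximated by a Riemann sum
    dt = t / steps
    return sum(speed(i * dt) * dt for i in range(steps))
```

The picture (speed, its slope, the area under it) carries the whole idea; the epsilon-delta machinery is just the formal version of it.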
The difference between a programmer and an engineer is that a programmer doesn't know any of this stuff. Can't do any of this stuff. An engineer can say that a problem is going to be computationally hard to solve, and she'll be able to prove it mathematically. And then she'll be able to look at other ways to solve the problem, or modify the problem so that it becomes easy to solve with a computer. Programmers just code. Maybe they'll use a profiler and try to optimize their code, or use various design principles to make their code more efficient and easier to maintain. But when you get right down to it, programmers don't design the next Deep Blue, or the next 3D engine, or the next speech recognition engine. Engineers do. That's one reason I don't call myself an engineer.