
The dumb questions thread

Can someone explain the P versus NP problem to me? I read the Wikipedia entry and it didn't help me much. The most math I've had is business calculus.

One question that arises when studying algorithmic problems is how efficient a given solution is. For a given algorithm, we can work out the worst-case running time (assuming a basic operation that takes 1 unit of time), parametrized by the size of the input.

So for example, in order to figure out the maximum value in a list of n numbers, in the model where comparing two numbers takes 1 unit of time, we check the first number against the second, then check the winner against the third, then the winner against the fourth, and so on, until we have checked all the numbers and found the maximum. This always takes n - 1 comparisons, so the running time is n - 1 in the worst case. That is, it runs in time linear in the size of the input.
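
To make that concrete, here is a minimal Python sketch of that scan; the function name and the comparison counter are mine, added purely for illustration:

```python
def find_max(numbers):
    """Scan a non-empty list once, keeping the best value seen so far."""
    comparisons = 0           # counts the unit-cost comparisons described above
    best = numbers[0]
    for x in numbers[1:]:
        comparisons += 1      # one comparison per remaining element
        if x > best:
            best = x
    return best, comparisons  # always exactly len(numbers) - 1 comparisons

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # (9, 7): 8 numbers, 7 comparisons
```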

In general, we are interested in the asymptotic growth rate, and we say that the running time is O(f(n)) (read "Big-Oh of f of n"), meaning its asymptotic growth rate is bounded above by some constant times f(n). Similarly to O(f(n)), we say that an algorithm that needs a worst-case asymptotic running time of at least some constant times f(n) has running time Ω(f(n)) (read "Big-Omega of f of n"). If an algorithm is both O(f(n)) and Ω(f(n)), we say it is Θ(f(n)) ("Big-Theta of f of n").
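
For reference, one standard way to write those three definitions out formally (with T(n) standing for the worst-case running time):

```latex
\begin{align*}
T(n) = O(f(n))      &\iff \exists\, c > 0,\ n_0 \ \text{such that}\ T(n) \le c \cdot f(n) \ \text{for all}\ n \ge n_0 \\
T(n) = \Omega(f(n)) &\iff \exists\, c > 0,\ n_0 \ \text{such that}\ T(n) \ge c \cdot f(n) \ \text{for all}\ n \ge n_0 \\
T(n) = \Theta(f(n)) &\iff T(n) = O(f(n)) \ \text{and}\ T(n) = \Omega(f(n))
\end{align*}
```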

The algorithm above solves the "find the maximum" problem in Θ(n) time, and we can argue that any algorithm must check every number in the list at least once, so every possible algorithm needs Ω(n) time; the "find the maximum" problem has therefore been solved asymptotically optimally. In fact, it isn't hard to prove that every algorithm needs at least n - 1 comparisons, so no algorithm runs faster than the one above. The next question to ask is: for which problems is there a gap between the best known algorithm and the best known lower bound? In other words, where are we being inefficient?

It shouldn't be too surprising that there is a hierarchy of running times. Algorithms that run in worst-case time O(n^c) for some constant c > 0 are said to run in polynomial time, as their growth rate is at most some polynomial in the input size n. Algorithms that run in worst-case time Ω(c^n) for some constant c > 1 are said to run in exponential time. When dealing with large instances, exponential-time algorithms take essentially forever (2^100000 is a very big number), while polynomials scale more reasonably (100000^2 is big, but much, much smaller than 2^100000).
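
To see that gap concretely, here is a quick Python comparison (the choice of n = 100000 is arbitrary, just to echo the numbers above):

```python
n = 100000

polynomial  = n ** 2   # n^2 for n = 100,000
exponential = 2 ** n   # 2^n for n = 100,000

print(len(str(polynomial)))   # 11     -- an 11-digit number
print(len(str(exponential)))  # 30103  -- a number with over thirty thousand digits
```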

Ideally, we want every problem we want to solve to have a polynomial-time algorithm. Unfortunately, there are problems that provably need exponential time, so we need to be more specific and ask instead whether every 'reasonable' problem we want to solve has a polynomial-time algorithm. By reasonable, we are going to mean 'has a succinct yes-certificate', which means that maybe we can't find the answer in polynomial time, but if the answer is yes then there is a hint that we can use to verify the answer in polynomial time.

We call the collection of decision problems (where the output is either yes or no) that have polynomial-time solutions the complexity class P (P for 'polynomial time'). These are the problems with 'fast' solutions.

We call the collection of decision problems that have a succinct yes-certificate the complexity class NP (NP for 'nondeterministic polynomial time', the origin of which I will not explain :p). These are the problems for which we can quickly check that the answer is yes (given a hint).
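
As an illustration, here is a Python sketch using subset sum (a classic NP problem): given a list of numbers and a target, is there a subset that sums to the target? Finding a solution by brute force may mean trying exponentially many subsets, but checking a proposed subset (the 'hint', or certificate) takes only polynomial time. The function names and the sample numbers are mine, for illustration only:

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time check: is `certificate` a sub-collection of `numbers` summing to `target`?"""
    remaining = list(numbers)
    for x in certificate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(certificate) == target

def solve(numbers, target):
    """Brute force: try every subset -- exponentially many of them in the worst case."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))           # [4, 5] -- found the slow way
print(verify(nums, 9, [4, 5]))  # True   -- checked the fast way, given the hint
```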

Finally, we get to the question: Is P = NP? That is, is every problem that is efficiently yes-checkable also efficiently solvable? Most experts think no, but we don't have a proof yet.
 

A nice clear exposition. I wish I had the energy and bandwidth to go into these kinds of topics.
 
Thanks beero1000! That made sense to me.

My pleasure. It's a fascinating subject, and I could go on about it for a while. The disappointing reality, though, is that we probably won't see a solution in our lifetimes. The problem is just too hard. Prevailing opinion is that Mulmuley's geometric complexity theory, applying algebraic geometry techniques to complexity theory, is the only known idea that has a chance of resolving it, but even so it will probably be a century or more. :( Here's hoping that view is too pessimistic.

This is a good overview: http://cacm.acm.org/magazines/2009/9/38904-the-status-of-the-p-versus-np-problem/fulltext
 
What is an eclipse called if it's the Earth eclipsing the sun as seen from the moon? I'm guessing it's still a solar eclipse since the terms solar eclipse and lunar eclipse refer to what is being eclipsed and not what is doing the eclipsing.

A less dumb question: would that not be the coolest, most badass thing to see??
 

Found an answer: NASA Earth Observatory says it's still called a lunar eclipse even if you're on the moon and the earth is eclipsing the sun from that view.
 
How very geocentric of them. IMO, it is a solar eclipse, when viewed from the Moon, as the Sun is the body being observed. Calling it a lunar eclipse simply because it is the same configuration of bodies suggests that the position of the observer is irrelevant, and that earthbound observers get naming rights.

Since when was NASA an authority on space, anyway? They can't even get an astronaut into low earth orbit, much less to the Moon.

(OK, full disclosure - nor can I).

- - - Updated - - -

...

Finally, we get to the question: Is P = NP? ...

Simple: P = NP if (and only if) N = 1 or P = 0.

Where do I collect my Fields Medal? :tomato:
 
I may word this poorly, but this IS the dumb question thread.

As I understand current cosmology, the universe we see around us expanded from an infinitely dense point some 13 billion years ago. It was at that point that space and time as we understand them began. It would seem to me that we really are not able to describe what was "there" when time began and space started expanding. Yet it seems physicists, at least some, are boldly stating that the universe came from...NOTHING. I wonder why they make that statement? It might actually be true, but how can they know? I read Krauss's book A Universe from Nothing, where he tries to lay out his ideas for how "nothing" might be unstable. I could take it as conjectural on his part, but a worthwhile thing to consider. But I am pretty sure he claims the universe came from nothing.

How can they know there was actually NOTHING? I realize why they might say there was no space, time, or matter/energy, but how can they know it was not some other heretofore unimagined state, one we have not yet contemplated, much less discovered, and certainly have been unable to model mathematically?

Like I said, it's a dumb question, so try not to pick apart my understanding so much as, if possible, try to explain why some physicists (Hawking and Krauss, to name a few) seem to be promoting this idea. It seems the Borde-Guth-Vilenkin theorem maintains, in Vilenkin's words: 'If someone asks me whether or not the theorem I proved with Borde and Guth implies that the universe had a beginning, I would say that the short answer is "yes". If you are willing to get into subtleties, then the answer is "No, but…" So, there are ways to get around having a beginning, but then you are forced to have something nearly as special as a beginning.'

It would seem that, at the least, they'd hedge their bets and acknowledge they don't know what was there, but explain that they are studying the plausibility/possibility of a universe coming, literally, from nothing. It seems to me that it might very well turn out that some other region/realm/state existed/exists that IS outside of time and space and does not contain classic forms of matter/energy, and they seem to be prematurely excluding this possibility without really knowing. But they are the experts, so what am I missing...what does it mean?

If the question is too involved to explain, at least point me to a book/website that explains why some physicists are promoting a universe specifically/exactly FROM NOTHING. Thanks.
 
You are beyond actual physics when you talk about the cosmology of "the beginning". The reason they are saying that the universe came from nothing is that the mathematics works: the universe is still nothing, mathematically. Take the mass and energy in the universe as positive energy and the gravity of the universe as negative energy. Add them and you get zero. The beginning was just when the positive and negative energy that amounted to zero separated (because it was unstable?) and we got what we now see. At least that is my understanding of their reasoning.
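
Very roughly, the bookkeeping behind that "zero total energy" idea looks like the following (this is only a heuristic, order-of-magnitude statement, with M standing for the total mass-energy content and R for a characteristic size of the observable universe):

```latex
\[
E_{\text{total}}
  \;=\; \underbrace{M c^2}_{\text{mass-energy (positive)}}
  \;+\; \underbrace{\left(-\,\frac{G M^2}{R}\right)}_{\text{gravitational (negative)}}
  \;\approx\; 0
\]
```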
 
Dumb question:

Is the only reason we think humans originated in Africa because we find remains there that date back millions of years? And is the reason we still find those that they were preserved when the area dried up and became arid?

Suppose humans evolved elsewhere outside of Africa, but the environment remained well watered and fertile and destroyed any remains before they could become fossilized?
 

I recommend the book Guns, Germs, and Steel. It's been too long since I read it, so I can't explain or even remember the answer to your question. But I do remember that it persuaded me that they know what they're talking about.

Plus it's a great read.
 
Dumb question:

Is the only reason we think humans originated in Africa because we find remains there that date back millions of years? And is the reason we still find those that they were preserved when the area dried up and became arid?

Suppose humans evolved elsewhere outside of Africa, but the environment remained well watered and fertile and destroyed any remains before they could become fossilized?

Genetics. If you look at mitochondrial DNA, you see that the older DNA appears in populations in East Africa. There is also much greater genetic diversity in Africa, supporting the fact that humans originated there.

Of course, it's possible that an earlier group moved to Africa and then died out completely elsewhere without leaving DNA or fossil evidence, but that's unlikely to the extreme.
 
Genetics. If you look at mitochondrial DNA, you see that the older DNA appears in populations in East Africa.

That confuses me. How can some human DNA be older than other human DNA? And how would we know?



There is also much greater genetic diversity in Africa, supporting the fact that humans originated there.

Okay, this part makes sense.
 
That confuses me. How can some human DNA be older than other human DNA? And how would we know?

You look at lots and lots of humans' genes.

Assuming that mutations occur at a relatively constant rate (or at least at the same rate everywhere and everywhen, regardless of whether it's constant), the genes that have accumulated the most mutations are older than the ones that have accumulated fewer.

("Most mutations" here means the largest number of distinct DNA sequences for a particular gene in your big collection of that gene from lots of humans.)
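
A toy Python sketch of that logic, with made-up sequences and no claim to real biology; the point is only that, at a constant mutation rate, the more internally diverse sample is the older lineage:

```python
def average_pairwise_differences(sequences):
    """Average number of positions at which two sampled sequences of equal length differ."""
    total, pairs = 0, 0
    for i in range(len(sequences)):
        for j in range(i + 1, len(sequences)):
            total += sum(a != b for a, b in zip(sequences[i], sequences[j]))
            pairs += 1
    return total / pairs

# Made-up samples of the "same gene" from two hypothetical populations.
population_a = ["ACGTACGT", "ACGTACGA", "ACCTACGT", "TCGTACGT"]  # more variants
population_b = ["ACGTACGT", "ACGTACGT", "ACGTACGA", "ACGTACGT"]  # fewer variants

print(average_pairwise_differences(population_a))  # 1.5 -- more diversity, older lineage
print(average_pairwise_differences(population_b))  # 0.5 -- less diversity, younger (or bottlenecked)
```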
 
Dumb question:

Is the only reason we think humans originated in Africa because we find remains there that date back millions of years?
No. Before we ever found any, Darwin said we probably originated in Africa and advised fossil hunters to look there, because our nearest relatives are chimps and gorillas and they live in Africa. (Not a dumb question, by the way. It was highly controversial among anthropologists pretty much all the way until Piltdown Man was proven to be a hoax.)

Suppose humans evolved elsewhere outside of Africa, but the environment remained well watered and fertile and destroyed any remains before they could become fossilized?
Depending on the time frame you care about, that's kind of what happened -- the long transition from monkey to ape happened in Asia, and then some apes moved to Africa. That counts as human evolution too, no?
 