
Can more info be gained by panning a telescope?

repoman

http://curious.astro.cornell.edu/about-us/45-our-solar-system/the-moon/the-moon-landings/122-are-there-telescopes-that-can-see-the-flag-and-lunar-rover-on-the-moon-beginner

Imagine that you could pan a telescope looking at the Moon from Earth with insanely high precision (almost infinitesimal) and compare images that were displaced by a small fraction of the resolution. That is, say the resolution was 100 meters, but you could take images that were only 1 meter apart (ignore feasibility for now). Could you somehow tease out more info? What about taking into account slight variations in the light intensity at each of the pixels of the camera sensor and running it through data processing? Kind of like adaptive optics.

Again, this is not about practicality, but about what would be possible at the mathematical limit or beyond it. One meter of panning precision at ~384,000,000 meters of distance seems tough.
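For scale, a quick back-of-the-envelope sketch (assuming the mean Earth-Moon distance of roughly 3.84e8 m) of the pointing angle a 1 meter step would correspond to:

Code:
import math

# What pointing step does 1 m on the lunar surface correspond to,
# as seen from Earth? (Small-angle approximation.)
moon_distance_m = 3.84e8   # mean Earth-Moon distance, metres
step_m = 1.0               # desired panning step on the Moon

angle_rad = step_m / moon_distance_m
angle_arcsec = math.degrees(angle_rad) * 3600

print(f"{angle_rad:.2e} rad = {angle_arcsec:.2e} arcsec")
# ~2.6e-09 rad, about half a milliarcsecond of pointing precision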
 
Short answer is no.
Panning can help if your resolution is limited by the pixel size of your CCD camera.
In telescopes, pixels are small enough that resolution is limited by diffraction instead. So no, it won't help.
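For what it's worth, the pixel-limited case he allows for is a real technique, usually called dithering (or "drizzling" in Hubble image processing): frames offset by sub-pixel amounts get interleaved onto a finer grid. A minimal one-dimensional sketch with made-up numbers of why that recovers sampling but not diffraction:

Code:
import numpy as np

# Sketch of dithering: when big detector pixels (not diffraction) limit
# resolution, frames shifted by sub-pixel amounts can be interleaved
# onto a finer grid.
rng = np.random.default_rng(0)
scene = rng.random(400)        # 1-D "true" scene on a fine grid
factor = 4                     # one detector pixel spans 4 fine samples

def observe(scene, shift):
    """Image the scene with big pixels after panning by `shift` fine samples."""
    return np.roll(scene, -shift).reshape(-1, factor).mean(axis=1)

frames = [observe(scene, k) for k in range(factor)]   # 4 dithered frames

recovered = np.zeros_like(scene)
for k, frame in enumerate(frames):
    recovered[k::factor] = frame   # interleave frame k at fine offset k

# `recovered` samples the pixel-averaged scene at full resolution: better
# than any single (aliased, 4x coarser) frame, but still blurred by the
# pixel response, and nothing here touches the diffraction PSF.

The catch is that once the diffraction blur is wider than a pixel, the extra samples contain no new spatial frequencies, which is the situation he's describing.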
 
There already is a technique like this, but instead of panning a single telescope (which doesn't really do much), it combines data from multiple telescopes.
 
That's not the same. In fact, it's completely different: it's making a bigger telescope from a few smaller ones by combining them.
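That "bigger telescope" framing can be put in numbers: the angular resolution of a single dish goes like the wavelength over its aperture, while a combined array's goes like the wavelength over its longest baseline. A quick sketch with illustrative figures:

Code:
# Angular resolution ~ wavelength / size: a single dish is limited by its
# aperture D, an interferometer by the longest baseline B between dishes.
wavelength = 550e-9   # visible light, metres

def resolution_arcsec(size_m):
    return (wavelength / size_m) * 206265   # radians -> arcseconds

print(resolution_arcsec(2.4))    # one 2.4 m mirror:   ~0.047"
print(resolution_arcsec(100.0))  # dishes 100 m apart: ~0.0011"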
 
Ought to be possible -- more information always helps if you know how to process it. It sounds like deblurring a photo by constructing hypothetical mathematical models of the photographed object and of the imperfections in the camera optics, simulating the light waves in order to reconstruct an approximation of the photo, and then optimizing the bejesus out of the parameters of your models using a conjugate gradient algorithm to incrementally improve the match between the simulation and the original blurry image. It's computationally expensive.
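A minimal sketch of that fit-the-forward-model idea, assuming for simplicity that the blur is a known Gaussian, using scipy's conjugate-gradient optimizer:

Code:
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import gaussian_filter

# Toy "truth" and a blurry observation of it (forward model: Gaussian blur).
truth = np.zeros((32, 32))
truth[8:24, 14:18] = 1.0
observed = gaussian_filter(truth, sigma=2.0)

def loss(x_flat):
    """Squared mismatch between the simulated blurry image and the data."""
    x = x_flat.reshape(truth.shape)
    residual = gaussian_filter(x, sigma=2.0) - observed
    return 0.5 * np.sum(residual ** 2)

def grad(x_flat):
    """Gradient of the loss; the Gaussian blur is self-adjoint, so the
    adjoint in the chain rule is just another blur."""
    x = x_flat.reshape(truth.shape)
    residual = gaussian_filter(x, sigma=2.0) - observed
    return gaussian_filter(residual, sigma=2.0).ravel()

# Conjugate-gradient fit of the model image to the blurry observation.
result = minimize(loss, np.zeros(truth.size), jac=grad, method="CG")
estimate = result.x.reshape(truth.shape)
print("max abs error:", np.abs(estimate - truth).max())

With no noise this converges toward the true object; the point about noise made further down the thread is about what happens when the data isn't noise-free.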

Barbos is right about diffraction, but as we've discovered in the semiconductor industry, what was once the diffraction limit is now the diffraction challenge.
 

He's talking about panning a single telescope to get more information. Combining multiple telescopes does a better job of getting more information.
 
Information theory suggests there is more information in moving sources than in stationary ones. If there is enough information in the prior signal, one might use a technique similar to analyzing a sound source moving toward and away from the listener, a situation where velocity can be gauged as well as frequency. In the minimum-tonal-duration case, that would be where motion increases the information available. I tried something like that with sound for my dissertation, when I used a moving sound source to get at the minimum audible movement angle, which, at low rates of motion, was less than the minimum audible angle.

Of course, this approach assumes you have a grasp of what you are trying to enhance, because at larger movement rates the audible movement angle became larger than the minimum audible angle. In those cases one goes from discriminating spatial intervals to discriminating rates.

Although I know very little about the visibility of light at a distance, I'm pretty sure those minimums need to be reached before you can get measurable movement between two images. In that task I think it's more like getting tonality out of a tone burst.
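The moving-sound-source case he's describing rests on the Doppler shift, where the received frequency encodes the source's velocity. A minimal illustration with made-up numbers:

Code:
# Doppler shift for a source moving along the line to a stationary listener.
c_sound = 343.0   # speed of sound in air, m/s

def received_freq(f_source, v_source):
    """v_source > 0 means the source approaches the listener."""
    return f_source * c_sound / (c_sound - v_source)

print(received_freq(440.0, 10.0))    # approaching at 10 m/s: ~453 Hz
print(received_freq(440.0, -10.0))   # receding at 10 m/s:   ~428 Hz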
 
For something very far away like the moon, using one telescope to take pictures at different moments seems equivalent to using different telescopes to take pictures at the same time. But I can't see how you could merge the light rays optically with one moving telescope the way it's routinely done with several fixed ones. And if not, then diffraction won't be similarly reduced.

There's also the issue of the effect on light rays of thermal layers of the atmosphere in front of the telescope. The effect is broadly to move pixels around a bit. This could be improved by combining several shots and taking the average for each pixel.
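A sketch of that shot-combining idea, with the common refinement of first estimating each frame's jitter by phase correlation and undoing it so the average doesn't smear (the numbers and jitter model here are made up):

Code:
import numpy as np

# Atmospheric jitter shifts the whole frame around from shot to shot.
# Estimate each frame's shift against a reference, undo it, then average.
rng = np.random.default_rng(2)
truth = rng.random((64, 64))   # stand-in for the steady image

def phase_correlate(a, b):
    """Integer (dy, dx) to roll b by so that it aligns with a."""
    cross = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
    return np.unravel_index(np.argmax(np.abs(cross)), cross.shape)

frames = []
for _ in range(20):
    dy, dx = rng.integers(-3, 4, size=2)          # random pointing jitter
    jittered = np.roll(truth, (dy, dx), axis=(0, 1))
    frames.append(jittered + 0.05 * rng.standard_normal(truth.shape))

reference = frames[0]                             # align everything to shot 1
aligned = [np.roll(f, phase_correlate(reference, f), axis=(0, 1))
           for f in frames]
stacked = np.mean(aligned, axis=0)   # sensor noise averages down ~1/sqrt(N)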

That wouldn't apply to Hubble since it's not subjected to the effect of the atmosphere on the earth.

As I type this, looking up I just realised a full moon is reflected right off a window pane high on a building in front of my window. It looks like a bigger and twisted moon.

I wait a bit. Now it's gone.
EB
 
He's talking about panning a single telescope to get more information. Combining multiple telescopes does a better job of getting more information.
I know what he is talking about; that's why I said it's not gonna work.
The pixels are already small enough for it not to work.
 
I dunno. If one treats visible objects, then there must be a number of pixels all receiving photons at least two or more times to confirm an object image. With rules similar to that, one can conceive of a process of scope movement where successive likely image hits are gathered, so one can get more from one selected image than by just keeping the scope on the image for saturated viewing.
 

he "Panned" the scope in his review.... To "Pan" something is to point out its flaws to the point of categorizing it as useless... a scathing review pans the product.
Pun on panning a camera, in terms of changing the direction it is pointing.

HaHa.
 

he "Panned" the scope in his review.... To "Pan" something is to point out its flaws to the point of categorizing it as useless... a scathing review pans the product.
Pun on panning a camera, in terms of changing the direction it is pointing.

HaHa.

Oh, you weren't talking about mounting the telescope in a cooking utensil?
 
Barbos is right about diffraction, but as we've discovered in the semiconductor industry, what was once the diffraction limit is now the diffraction challenge.
It's still a limit. First, when they say 10 nm photolithography, it doesn't mean they can make any arbitrary thing at that size at will. It's more like the precision with which they can place structures that are themselves larger than 10 nm. For example, a 10 nm process has wires that are, I think, 50 nm thick, but the accuracy with which those wires are placed is 10 nm, or something to that effect.
Second, applying multiple masks can let you form some structures that are in the 10 nm range, like transistor gates. None of it can be used in telescopes, unfortunately.
You mentioned unblurring. It works only in the classical limit where you have no noise at all. No noise means an infinite exposure time. In other words, you can unblur it just fine, but your image will be nothing but noise.
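That noise point is easy to demonstrate with a naive inverse filter: it divides the image spectrum by the blur's transfer function, and at high frequencies, where that function is nearly zero, any noise gets multiplied by enormous factors. A one-dimensional sketch:

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
truth = np.zeros(256)
truth[100:140] = 1.0
blurred = gaussian_filter(truth, sigma=2.0, mode="wrap")
noisy = blurred + 1e-3 * rng.standard_normal(256)   # tiny sensor noise

# Transfer function of the blur: FFT of its impulse response.
impulse = np.zeros(256)
impulse[0] = 1.0
H = np.fft.fft(gaussian_filter(impulse, sigma=2.0, mode="wrap"))

# Naive inverse filter: divide the spectrum by H. Where |H| ~ 0,
# the noise is amplified by orders of magnitude.
unblur = lambda img: np.real(np.fft.ifft(np.fft.fft(img) / H))
print("noise-free error:", np.abs(unblur(blurred) - truth).max())  # tiny
print("noisy error:     ", np.abs(unblur(noisy) - truth).max())    # huge

A Wiener filter tames this by refusing to divide where |H| is small, at the cost of not recovering those frequencies, which is the point being made above.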
 
For a circular Newtonian telescope, the diffraction limit is a small area in the focal plane called the Airy disk. That spot projected out into space is the resolution, barring aberrations. Diffraction effects plus aberrations are called the 'blur spot' in the focal plane. There is a tradeoff between magnification, aperture, field of view, and resolution. More aperture, more light.

Digital image processing can interpolate between pixels and bring out edges.

Try searching 'Newtonian telescope angular resolution'.
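The angular-resolution numbers behind this (and behind the Cornell link in the first post) come from the Rayleigh criterion, theta ~ 1.22 * lambda / D. Projected onto the Moon:

Code:
# Rayleigh criterion: smallest resolvable angle for an aperture D at
# wavelength lam, projected onto the lunar surface.
lam = 550e-9    # visible light, metres
moon = 3.84e8   # Earth-Moon distance, metres

def smallest_feature_on_moon(aperture_m):
    theta = 1.22 * lam / aperture_m   # radians
    return theta * moon               # metres on the Moon

print(smallest_feature_on_moon(2.4))    # Hubble-sized mirror: ~107 m
print(smallest_feature_on_moon(200.0))  # a flag-sized ~1.3 m feature
                                        # needs a ~200 m aperture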
 

he "Panned" the scope in his review.... To "Pan" something is to point out its flaws to the point of categorizing it as useless... a scathing review pans the product.
Pun on panning a camera, in terms of changing the direction it is pointing.

HaHa.

OK. I'll change my remark to "so?"
 
I suppose you could theoretically have a high-magnification scope with a tiny field of view.

Even if you could, panning it to scan a distant object would be difficult if not impossible.
 
I had another thought on panning a scope... A really cool trick when you are looking at an open cluster like the Pleiades is to wiggle the focus slightly in and out as you look at it. Think of it as "panning" in and out: you are rapidly changing focus from the very closest stars in the cluster to the very farthest stars in the cluster. The result is a pseudo-3D image of the cluster; your eyes adapt to the rapid focus change and you perceive depth, light-years and light-years of depth. It's pretty neat. And I would certainly say it is "more information".
 