Channel: The Optics Talk Forums : Rifle Scopes

Rifle Scopes : CAST YOUR VOTE: What's the best long-range scope?

Author: koshkin
Subject: CAST YOUR VOTE: What's the best long-range scope?
Posted: March/29/2014 at 16:59

Originally posted by J L:

Yes, the test was done by me and my colleague.
It is a simplified version of our own in-house test that we use for benchmarking. It is also a very handy checking tool when a customer feels there is something wrong with a scope's optical performance.

Resolution with colors is a crude and simplified test, all right. But we decided to offer it with the idea that it is still much, much better than nothing, as forums are full of endless debate about "glass quality".
I tried hard to figure out how to produce a sensible contrast portion for the test too, but it is impossible for many reasons. A bar test is a bar test: easy to understand and produce, and with some practice it gets the job done, IMO.

The idea was, and still is, to build a database where each scope owner can add their own results. That way part of the variation would be averaged away, but it is still a rough method and naturally far from actual objective testing.

My problem with the test you came up with is not that it is far from objective testing.  Both objective and subjective testing have their place.  

Keep in mind that this is what I do for a living: the company I work for makes both standard and custom test sets for a great variety of electro-optical devices, including riflescopes.  This subject is important to me since I deal with it every day: it is my job to figure out what the customer needs and come up with a test/calibration/metrology solution to fit those needs.

Using a reference scope really will help greatly, and I hope people do it.  However, the simple fact that you are assigning numerical scores brings in a variety of complications.

If you take your reference scope and run through the test on different days, with different lighting and after drinking varying amounts of coffee, you will end up with different numerical scores.  How will you then compare scopes across days?

Let's say scope A is your reference scope.

You went ahead and compared it to scopes B, C and D and developed numerical rankings where scope B was the best, followed by scope C, then A, and then D (B>C>A>D).

Then, on some other day, you ran another test with your reference scope (A) and three new scopes: E, F and G.

In this case, they stacked up in the following manner: E>A>F>G.

Moreover, the numerical score for scope A was notably higher the second time around, but not across the board: since you were eating carrots non-stop between the tests, contrast and grey-scale resolution were better, but for some unidentified lighting-related reason, green and red resolution were a bit worse.  As a matter of fact, the second time around scope A scored higher than scope C did the first time around.

How does scope E compare to scopes B and C?
How do scopes F and G compare to D?

Now you have actually acquired a lot of useful data, and you can conclude that there is a great likelihood that scopes E, B and C are generally better than F, G and D.
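To see how far a shared reference scope can (and cannot) carry you, here is a minimal sketch of the merging step, with entirely hypothetical scores: each scope's raw score is divided by reference scope A's score from the same session, so the two sessions can be pooled.

```python
# Hypothetical raw scores from two test sessions sharing reference scope A.
session_1 = {"A": 70, "B": 85, "C": 78, "D": 62}   # day 1
session_2 = {"A": 80, "E": 92, "F": 74, "G": 66}   # day 2

def normalize(session, reference="A"):
    """Express each score relative to the reference scope's score
    from the same session, so the reference always comes out as 1.0."""
    ref = session[reference]
    return {scope: score / ref for scope, score in session.items()}

# Pool the normalized sessions and rank every scope.
merged = {**normalize(session_1), **normalize(session_2)}
ranking = sorted(merged, key=merged.get, reverse=True)
print(ranking)  # B, E, C, A, F, D, G with these hypothetical numbers
```

Note the catch, which is exactly the point above: this normalization only rescues you if scope A's *relative* performance is stable between sessions. If A itself scored differently for carrot- or lighting-related reasons, the pooled ranking inherits that error.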

Now, we run into a few other complications related to the overall configuration of the scopes: you suggested the tests be done at maximum magnification and at half of the maximum.

Let's say a 5-20x50 scope and a 6-24x50 scope are a part of the same test.  The test was done twice with different lighting conditions.  When the light was a little lower, the first scope at 20x did better than the second one.  On a different day in brighter light, the second scope did better.  

FOV test: you evaluate the fidelity at the edge.  Let's say you compare two scopes with different FOVs.  The first scope has perfect edges and its FOV is 80% of the second scope.  The second scope has some edge deterioration, but is perfect through the middle 90% of its FOV.  The first scope would do better in your FOV test according to the instructions.
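The arithmetic behind that FOV example is worth spelling out. Using made-up but consistent numbers (scope 2's FOV set to an arbitrary 10 units):

```python
# Hypothetical numbers matching the example: scope 1's FOV is 80% of
# scope 2's; scope 1 is sharp edge to edge; scope 2 is sharp only over
# the middle 90% of its wider FOV.
fov_2 = 10.0              # scope 2 field of view, arbitrary units
fov_1 = 0.8 * fov_2       # scope 1: 80% of scope 2's FOV

sharp_1 = 1.0 * fov_1     # perfect edges: the entire FOV is sharp (8.0)
sharp_2 = 0.9 * fov_2     # soft edges outside the middle 90%     (9.0)

# An edge-fidelity score rewards scope 1, yet scope 2 actually
# delivers more total sharp field of view.
print(sharp_2 > sharp_1)
```

So a test that scores only edge fidelity ranks scope 1 higher even though scope 2 gives the shooter more usable sharp field.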

Then there is the simple matter of what FOV really means in afocal telescopes, and which portion of your eye's field of view the image in the eyepiece takes up.  If you really want to make this platform-independent, you need to be looking at that.

A couple of other points:  at the limits of the scope's performance, looking at repeating bar patterns is tricky.  There is a reason people are going away from it and NATO standardized the TOD algorithm.

In the original Hide thread there is a section on diffraction-limited resolution (the Rayleigh criterion is what you used, if I remember correctly).  That formula is monochromatic in nature and has little relevance for the polychromatic performance you are trying to test.
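The wavelength dependence is easy to show. The Rayleigh criterion gives an angular resolution limit of theta = 1.22 * lambda / D; the 50 mm aperture and the three wavelengths below are illustrative values, not from the thread:

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion: theta = 1.22 * lambda / D, converted
    from radians to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

aperture = 0.050  # illustrative 50 mm objective
for wl in (450e-9, 550e-9, 650e-9):  # blue, green, red
    print(f"{wl * 1e9:.0f} nm: {rayleigh_limit_arcsec(wl, aperture):.2f} arcsec")
```

The limit shifts by roughly 40% between the blue and red ends of the visible band, which is why a single monochromatic figure says little about a color-bar test.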

I can do some more nitpicking, but in the end it comes down to what is important for the user: someone who mostly shoots far away in broad daylight wants something a little different from the guy whose primary purpose is shooting after sunset.

ILya
