178

I'm not sure where this question should go, but I think this site is as good as any.

When humankind started out, all we had was sticks and stones. Today we have electron microscopes, gigapixel cameras and atomic clocks. These instruments are many orders of magnitude more precise than what we started out with and they required other precision instruments in their making. But how did we get here? The way I understand it, errors only accumulate. The more you measure things and add or multiply those measurements, the greater your errors will become. And if you have a novel precision tool and it's the first one of its kind - then there's nothing to calibrate it against.

So how is it possible that the precision of humanity's tools keeps increasing?

Qmechanic
  • 220,844
Vilx-
  • 3,501

21 Answers

117

I work with an old toolmaker, a former metrologist, who goes on about this all day.

It seems to boil down to exploiting symmetries since the only way you can really check something is against itself.

Squareness: For example, you can check a square by aligning one edge to the center of a straight edge and tracing a right angle, then flipping it over and re-aligning it to the straight edge while also aligning it to the traced edge as best you can. Then trace it out again. The traces should overlap if the square is truly square. If it's not, there will be an angular deviation. The longer the arms, the more evident small errors will be, and you can measure the linear deviation at the ends relative to the length of the arms to quantify squareness.

Other Angles: A lot of other angles can be treated as integer divisions of the 90 degree angle which you obtained via symmetry. For example, you know two 45 degree angles should perfectly fill 90 degrees, so you can trace out a 45 degree angle and move it around to make sure it perfectly fills the remaining half. Or split 90 degrees into two and compare the two halves to make sure they match. You can also use knowledge of geometry and form a triangle using fixed lengths with particular ratios to obtain angles, such as the 3-4-5 triangle.
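A minimal numerical sketch of both ideas (Python; the slightly-off 44.8 degree template is a made-up value): the 3-4-5 check recovers a right angle from nothing but lengths, and the "two copies must fill the corner" trick doubles a template's error, making it easier to spot.

```python
import math

# 3-4-5 triangle: if legs 3 and 4 close against a hypotenuse of 5, the corner
# between the legs must be a right angle (law of cosines, solved for the angle).
a, b, c = 3.0, 4.0, 5.0
corner = math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))
print(corner)                                    # 90.0

# Symmetry check: two copies of a "45 degree" template should exactly fill a
# square corner; any leftover gap is twice the template's error.
template = 44.8                                  # hypothetical, slightly-off template
gap = 90 - 2 * template
print(f"{gap:.2f} degree gap -> template is off by {gap / 2:.2f} degrees")
```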

Flat Surfaces: Similarly, you can produce flat surfaces by lapping two surfaces against each other and if you do it properly (it actually requires three surfaces and is known as the 3-plate method), the high points wear away first, leaving two surfaces which must be symmetrical, aka flat. In this way, flat surfaces have a self-referencing method of manufacture. This is supremely important because, as far as I know, they are the only things that do.

I started talking about squares first since the symmetry is easier to describe for them, but it is the flatness of surface plates and their self-referencing manufacture that allow you to begin making the physical tools to actually apply the concept of symmetries to make the other measurements. You need straight edges to make squares and you can't make (or at least, check) straight edges without flat surface plates, nor can you check if something is round...

"Roundness": After you've produced your surface plate, straight edges, and squares using the methods above, then you can check how round something is by rolling it along a surface plate and using a gauge block or indicator to check how much the height varies as it rolls.


EDIT: As mentioned by a commenter, this only checks diameter, and you can have non-circular lobed shapes (such as those made in centerless grinding, which can be nearly imperceptibly non-circular) where the diameter is constant but the radius is not. Checking roundness via radius requires a lot more parts: basically enough to make a lathe and indicators so you can mount the part between centers and turn it while directly measuring the radius. You can also place it in V-blocks on a surface plate and measure, but the V-block needs to be the correct angle relative to the number of lobes so they seat properly, or the measurement will miss them. Fortunately lathes are rather basic and simple machinery and make circular shapes to begin with. You don't encounter lobed shapes until you have more advanced machinery like centerless grinders.

I suppose you could also place it vertically on a turntable if it has a flat, squared end, indicate it, and slide it around as you turn it to see if you can find a location where the radius measures constant all the way around.
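To make the lobed-shape pitfall above concrete, here is a small numerical sketch (Python; the unit width is arbitrary) of a Reuleaux triangle, the classic constant-width shape: its extent between parallel jaws is the same in every direction, so it rolls under a height gauge like a circle, yet its radius from the centroid clearly varies.

```python
import numpy as np

s = 1.0                                         # constant width (assumed, arbitrary units)
A = np.array([0.0, 0.0])                        # vertices of the underlying
B = np.array([s, 0.0])                          # equilateral triangle
C = np.array([s / 2, s * np.sqrt(3) / 2])
verts = [A, B, C]

# Boundary = three 60-degree arcs, each centred on one vertex with radius s,
# sweeping between the other two vertices.
pts = []
for i, centre in enumerate(verts):
    p, q = verts[(i + 1) % 3], verts[(i + 2) % 3]
    a0 = np.arctan2(p[1] - centre[1], p[0] - centre[0])
    a1 = np.arctan2(q[1] - centre[1], q[0] - centre[0])
    for t in np.linspace(a0, a1, 200):
        pts.append(centre + s * np.array([np.cos(t), np.sin(t)]))
pts = np.array(pts)

# "Diameter" as a rolling check sees it: the shape's extent along each direction.
widths = []
for theta in np.linspace(0, np.pi, 180, endpoint=False):
    d = np.array([np.cos(theta), np.sin(theta)])
    proj = pts @ d
    widths.append(proj.max() - proj.min())

radii = np.linalg.norm(pts - (A + B + C) / 3, axis=1)
print(f"diameter: {min(widths):.4f} .. {max(widths):.4f}")   # ~1.0000 everywhere
print(f"radius:   {radii.min():.4f} .. {radii.max():.4f}")   # ~0.4226 .. 0.5774
```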


Parallel: You might have asked yourself "Why do you need a square to measure roundness above?" The answer is that squares don't just let you check if something is square. They also let you indirectly check the opposite: whether something is parallel. You need the square to make sure the gauge block's top and bottom surfaces are parallel to each other so that you can place the gauge block onto the surface plate, then place a straight edge onto the gauge block such that the straight edge runs parallel to the surface plate. Only then can you measure the height of the workpiece as it, hopefully, rolls. Incidentally, this also requires the straight edge to be square, which you can't know without having a square.

More On Squareness: You can also now measure the squareness of a physical object by placing it on a surface plate, and fixing a straight edge with square sides to the side of the workpiece such that the straight edge extends horizontally away from the workpiece and cantilevers over the surface plate. You then measure the difference between the heights at which the straight edge sits above the surface plate at its two ends. The longer the straight edge, the more resolution you have, so long as sagging doesn't become an issue.


From these basic measurements (square, round, flat/straight), you get all the other mechanical measurements. The inherent symmetries which enable self-checking are what makes "straight", "flat", "round", and "square" special. It's why we use these properties and not random arcs, polygons, or angles as references when calibrating stuff.


Actually making stuff rather than just measuring: Up until now I mainly talked about measurement. The only manufacturing I spoke about was the surface plate and its very important self-referencing nature which allows it to make itself. That's because so long as you have a way to make that first reference from which other references derive, you can very painstakingly free-hand workpieces and keep measuring until you get it straight, round or square. After which you can use the result to more easily make other things.

Just think about free-hand filing a round AND straight hole in a wood wagon wheel, and then free-hand filing a round AND straight axle. It makes my brain glaze over too. It'd also be a waste since you would be much better off doing that for parts of a lathe which could be used to make more lathes and wagon wheels.

It's tough enough to file a piece of steel into a square cube with a file that is actually straight, let alone with a not-so-straight file, which is probably all that was available for much of the past. But so long as you have a square to check it with, you just keep correcting it until you get it. Filing a cube is apparently a common task for teaching apprentice toolmakers how to use a file.

Spheres: To make a sphere you can start with a stick fixed at one end to draw an arc. Then you put some stock onto a lathe and lathe out that arc. Then you take that workpiece, turn it 90 degrees, put it back in the lathe using a special fixture, and lathe out another arc. That gives you a sphere-like thing.

I don't know how sphericity is measured especially when lobed shapes exist (maybe you seat them in a ring like the end of a hollow tube and measure?). Or how really accurate spheres, especially gauge spheres, are made. It's secret, apparently.

EDIT: Someone mentioned putting molten material into freefall and allowing surface tension to pull it into a sphere as it cools on the way down. That would work for low-tech production of smaller spheres, and if you could control the volume of material as it was dropped you could control the size. Still not sure how precisely manufactured spheres are made, though, or how they are ground. There doesn't seem to be an obvious way to use spheres to make more spheres, unlike the other things.

DKNguyen
  • 9,389
62

The more you measure things and add or multiply those measurements, the greater your errors will become.

Not necessarily. If the errors in a series of measurements are independent and there is no systematic bias in the measurements, then taking the average of all the measurements will give you a more accurate value than any individual measurement.
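A quick numerical illustration (Python; the true value, scatter, and trial counts are made up): repeating a measurement n times and averaging shrinks the scatter of the result roughly as $1/\sqrt{n}$, provided the errors really are independent and unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0     # hypothetical quantity being measured (arbitrary units)
sigma = 1.0            # scatter of a single measurement, no systematic bias

for n in (1, 10, 100, 1000):
    # repeat the "average n measurements" experiment 2000 times and look at
    # how much the averaged result itself scatters
    means = rng.normal(true_value, sigma, size=(2000, n)).mean(axis=1)
    print(f"n = {n:>4}: scatter of the average ~ {means.std():.3f}")
# the scatter falls roughly as sigma / sqrt(n): ~1.0, ~0.32, ~0.10, ~0.03
```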

gandalf61
  • 63,999
34

One thing I haven't seen mentioned is amplification.

Amplification: Imagine you have a lever that is 10 cm on one side of the pivot and 1 m on the other. Then any change in position on the short side is amplified 10 times on the long side. This new precision can be used to make new even more precise tools. Rinse and repeat.

Even just by iterating through this crude method over and over you will eventually get to levels of precision far beyond what the human eye can detect.
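As a toy calculation (Python; the 0.1 mm eye resolution and the 10:1 ratio are just assumed round numbers), chaining such stages multiplies the smallest motion you can resolve:

```python
eye_resolution_mm = 0.1          # assumed smallest motion visible by eye
lever_ratio = 10                 # 1 m arm driven by a 10 cm arm

for stages in range(4):
    detectable_mm = eye_resolution_mm / lever_ratio**stages
    print(f"{stages} lever stage(s): resolve ~{detectable_mm:.4f} mm at the input")
```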

27

That's a really nice one! I'm not an expert on experiments and measurements but this is how I see it:

The ultimate calibration tool is always nature. We pick special phenomena which rely on certain parameters.
Take temperature, for example. The phenomenon here is the phase transition of water. You set boiling water to $100^\circ$ C and freezing water to $0^\circ$ C and calibrate all measurement instruments (for example a mercury thermometer) around those points.

But sometimes, one does not have such a perfect phenomenon in nature. Length is such an example. In these cases, the most precise instrument sets the standard.
At some point in France, someone forged a rod to define the metre. Later, someone else measured the speed of light based on that definition of the metre. When it became apparent that measurements of the speed of light are actually very precise (since time can be quantified precisely through atomic transitions and we are very good at optics), the speed of light was set to a fixed value ($299\,792\,458~m/s$), now in reverse defining the metre. And this is how we calibrate what is now the most precise measurement of length.

Cream
  • 1,658
21

Measurement errors can accumulate, yes.

But we are not talking about measurements here, we are talking about processes and tooling. That's another deal.

If you fling off a piece of flint from a flintstone by banging another stone into it, then the "precision" of that other stone doesn't matter. What matters is that you supply enough force - not precise force, but enough force - to interfere with the structure of the flint in line with its internal structure (such as along grain boundaries, along fibres, etc., depending on the material).

The flint piece might come off sharp as a knife due to the way it fractures. The "precision" of this new tool you just created (the knife-like flint piece) depended on the disruption of its internal structure, not on the tool used to make it.

Steeven
  • 53,191
17

The way I understand it, errors only accumulate.

That is not always the case. Human ingenuity found and systematically selected processes which improved a particular quality. Two examples:

  • You can hand polish a spherical mirror or even a plane mirror down below 100 nm surface roughness with easily available raw materials.
  • Zone melting of semiconductor or other crystalline materials can yield a final product with less than 1:1,000,000,000 impurity. You only need an oven with the right design.
g.kertesz
  • 996
15

I expect that worm drives are part of the story.

For every rotation of the driven axis the driving axis goes through multiple rotations. I expect that it is mechanically possible to capitalize on that ratio.

Other than that, very high levels of consistency can be achieved with very low-level means. A necessary tool for precision measurement is a large flat surface.

Flat surfaces like that are fabricated by grinding stone slabs against each other. If you used just two slabs, you could not prevent the possibility of one slab ending up slightly concave and the other slightly convex. So to fabricate flat surfaces a set of three slabs is used, changing out the pairing from time to time. That way you can create very high precision flat surfaces, which are subsequently used as references.
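A minimal 1-D sketch (Python, with a grossly exaggerated made-up bow) of why the third slab matters: a convex plate and its complementary concave plate mate perfectly and would pass a two-plate test, but pairing them with a third, differently bowed plate exposes the error.

```python
import numpy as np

x = np.linspace(-1, 1, 101)

A = +0.05 * (1 - x**2)       # convex plate (exaggerated bow)
B = -0.05 * (1 - x**2)       # concave plate, the exact complement of A
C = +0.03 * (1 - x**2)       # a third, differently bowed plate

def mismatch(p, q):
    """Gap that opens up when plate p is laid face-down onto plate q, i.e. what a
    marking compound or feeler gauge would reveal. (Profiles here are symmetric,
    so the mirror flip can be ignored.)"""
    gap = p + q              # flipped-p surface plus q surface, up to a constant
    return gap.max() - gap.min()

print("A on B:", mismatch(A, B))   # ~0:   the complementary pair mates despite the bow
print("A on C:", mismatch(A, C))   # 0.08: the third plate exposes the error
print("B on C:", mismatch(B, C))   # 0.02: and again from the other side
```

In the real three-plate method you would then lap the mismatched pairs and re-check, cycling through the pairings until all three mate.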

I assume similar procedures are used to create high precision instruments that check for squareness.

I recommend you look up articles and videos about the process of 'scraping'. The process of scraping involves steps where a high quality flat stone is used as reference to achieve a high quality of flatness of a machine bed.

The lead screw that moves the table must be manufactured with precision, but I expect that a milling machine can create parts to a higher precision than that of its own parts. The lead screw has a small pitch compared to its diameter; a complete turn of the table feed wheel results in a relatively small displacement of the table.
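A back-of-the-envelope sketch (Python; the 2 mm pitch and 100 mm handwheel are assumed values, not from the answer) of how much this geometry de-amplifies hand errors:

```python
import math

pitch_mm = 2.0                     # assumed table travel per full turn of the screw
handwheel_diameter_mm = 100.0      # assumed handwheel size

rim_travel_per_turn = math.pi * handwheel_diameter_mm   # ~314 mm of hand motion
reduction = rim_travel_per_turn / pitch_mm              # hand motion : table motion
print(f"reduction ~ {reduction:.0f}:1")
# a 1 mm error in where you stop the rim becomes only a few micrometres at the table
print(f"1 mm hand error -> {1.0 / reduction * 1000:.1f} um table error")
```

This is the mirror image of the lever amplification mentioned in another answer: the screw geometry divides input errors instead of multiplying input motions.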

See also:
youtube video Origin of precision
youtube video How to Measure to a MILLIONTH of an Inch (The Dawn of Precision)

Cleonis
  • 24,617
8

In addition to the excellent answer by Cream: let us not confuse precision and accuracy.

Accuracy is the ability to obtain a correct measurement on an absolute scale. Following Cream's example, if you use a metal slab to define the length of the meter, then indeed all your measurements are limited by how well you can produce and measure such a slab. E.g. the reference platinum-iridium meter rod used at the beginning of the 20th century was manufactured with 200 nm uncertainty, which limits the relative accuracy of all measurements based on it to $2\cdot 10^{-7}$. This is why at some point we began to use the more reliably measured speed of light to define the meter.

Precision is the ability to distinguish small differences in the quantity you measure. Again, read the history of measurements of the speed of light; it's a truly amazing read.

Let's look at the example of the Fizeau-Foucault experiment with a rotating mirror (https://en.wikipedia.org/wiki/Fizeau%E2%80%93Foucault_apparatus). Notice how imperfections of the device propagate to your final result. If you measure the mirror rotation speed with e.g. 1% precision (I think they actually did better), this propagates to only 1% in your time measurement, even if it's a time period way too short to measure directly.

If instead you conducted a Galileo-style experiment where you attempt to measure the time of light propagation over a distance of e.g. 1 km (today we know it's ~3 µs) using a 0.01 s precision stopwatch, your relative uncertainty would be 0.01 s / 3 µs ≈ 300,000%, far less useful (though Galileo's work had significance in his time).
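The same comparison in a few lines of Python (numbers taken from the paragraphs above; the 1% mirror figure is just the illustrative one):

```python
transit_time_s = 3.3e-6            # light travel time over roughly 1 km

# Rotating-mirror style: the raw reading is a rotation speed, so a 1%
# reading error propagates to roughly 1% in the inferred transit time.
mirror_relative_error = 0.01
print(f"rotating mirror: {mirror_relative_error:.0%} relative uncertainty")

# Galileo style: time the transit directly with a 0.01 s stopwatch.
stopwatch_resolution_s = 0.01
print(f"stopwatch: {stopwatch_resolution_s / transit_time_s:.0%} relative uncertainty")
# -> roughly 300,000%: the device's imperfection swamps the quantity measured
```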

This example demonstrates how it is both important and possible to design your measurement device in a way that is either insensitive to its own imperfections, or at least minimizes their impact on the final measurement.

user1079505
  • 1,009
7

I would also like to emphasize the importance of periodic waves, which can magnify tiny differences into something measurable. The most precise length measurements ever, by LIGO, are done with interferometry. A similar principle was used in the Michelson-Morley experiment.

This also allows us to measure the speed of light for waves of different frequencies over long distances (gamma ray bursts over the distance of the observable universe).

Edit: So, if we imagine that light waves are a sort of uber-precise ruler, it's not as though we've constructed this more precise ruler out of a less precise ruler; it's more like we "found" a more precise ruler in nature (which we are able to use because we understand the laws of physics) and decided to use that one instead.
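For concreteness, a tiny sketch (Python; the HeNe wavelength and the fringe count are just example numbers) of how fringe counting in a basic Michelson-type setup turns this wavelength-sized "ruler" into a displacement readout:

```python
wavelength_nm = 633.0       # common HeNe laser line, used here as the "ruler"
fringes_counted = 1000      # each full fringe = the moving mirror travelled half a wavelength

displacement_nm = fringes_counted * wavelength_nm / 2
print(f"mirror moved {displacement_nm / 1e6:.4f} mm")
# interpolating within a single fringe resolves far below the wavelength itself
```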

user1379857
  • 12,195
6

If an instrument involves counting, just count more

Some measurements are done by counting things. For example, the second is defined as n = 9,192,631,770 cycles of the radiation from a hyperfine transition in Cs-133. The number of whole cycles that have elapsed on an atomic clock is exactly known, even if there is some uncertainty in the additional fraction of a cycle. No matter how many cycles have elapsed, the uncertainty is always one cycle.

So the trick to improve precision is to just count more cycles. Counting n cycles gives you one second, with an uncertainty of one cycle. Counting 1000n gives you 1000 seconds, but the uncertainty is still one cycle. Counting one billion n gives you one billion seconds, still with an uncertainty of one cycle. The absolute precision is always one cycle, but the relative precision improves by increasing how much you count.
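The same arithmetic in a few lines (Python; the durations are arbitrary examples):

```python
CS_HYPERFINE_HZ = 9_192_631_770          # cycles per second, by definition

for seconds in (1, 1_000, 1_000_000_000):
    cycles = seconds * CS_HYPERFINE_HZ
    relative_uncertainty = 1 / cycles    # always off by at most about one cycle
    print(f"{seconds:>13,} s: relative uncertainty ~ {relative_uncertainty:.1e}")
```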

The only limit is how high you are willing and able to count.

rob
  • 96,301
DrSheldon
  • 772
5

I'll point out that you can use multiple low-precision measuring tools in conjunction to get a measurement that is more precise than any of the tools individually. This is the idea behind a Vernier scale, which essentially uses two rulers of different spacing to interpolate distances between the markings. A Vernier scale can use two rulers that are marked at no less than millimeter spacing to precisely measure distances to the hundredth of a millimeter - this is possible despite the fact that none of your tools can measure that precisely individually! Basically, by comparing measurements among low-precision tools, you can get a high-precision measurement.
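Here is a toy readout model (Python; the 10-division, 1 mm-pitch geometry and the 13.37 mm example are assumptions, and real verniers resolve finer by using more divisions): each vernier division is slightly shorter than a main-scale division, and whichever vernier line happens to align with a main-scale line tells you the fraction in between.

```python
def read_vernier(true_position_mm, divisions=10, pitch_mm=1.0):
    """Toy model of reading a vernier: coarse value from the main scale, fine value
    from whichever vernier line best aligns with a main-scale graduation."""
    main = int(true_position_mm // pitch_mm)                # whole millimetres
    vernier_pitch = pitch_mm * (divisions - 1) / divisions  # 0.9 mm per division

    def misalignment(k):  # distance from vernier line k to the nearest main-scale line
        r = (true_position_mm + k * vernier_pitch) % pitch_mm
        return min(r, pitch_mm - r)

    best = min(range(divisions), key=misalignment)
    return main * pitch_mm + best * pitch_mm / divisions

print(read_vernier(13.37))   # -> 13.4: resolved to 0.1 mm using only 1 mm graduations
```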

5

You exploit the relationships between theory and the thing you want to measure. You come up with devices that are, in a sense, independent from the error. Let's focus on time.

Imagine you wanted to build a clock. You go to your theory toolbox and look around. Immediately you notice things like pendulums and springs. Theory tells us that 'ideal' versions of these things completely fit the bill. But theory also tells us that in real life objects are not 'ideal'. There's all sorts of things that go wrong when you try and build one, both internally (try and build a pendulum with a rope of zero mass) and externally (air resistance, etc). In other words, the theory is simple but trying to implement a real world version of it is complex. That's where the idea of independence comes in. You don't need a more accurate clock to know that lessening air resistance will improve the regularity of the oscillations, that comes from your knowledge of the theory. You can greatly improve the performance of the device simply by making the device closer to its idealized version -- you lengthen the rope of the pendulum, you make the thing swinging at the end extremely heavy and dense vs the weight of the rope and so on.

After a while you start to hit a wall where your improvements to the design become increasingly marginal vs the effort. Luckily it's now the early 1900s -- some other people have been playing around with this electricity thing for about a century and have developed a lot of theory around it. You exploit this theory and make a quartz watch. But this too ultimately has implementation limitations. And then atomic theory is developed and you realize you can exploit the vibrations of atoms and build a device that is even closer to the 'ideal' version of itself. Interestingly, as this happens the theories become significantly more complex, but the implementations in a way become simpler. In other words, a quartz watch is much closer to its theoretical equivalent than any pendulum is to an ideal pendulum. This is why you can buy a quartz watch for a couple dollars that outperforms basically any mechanical watch.

So the story of time measurement is really the story of people using theory to come up with devices whose real world versions can more closely match the theoretical versions of themselves.

eps
  • 1,219
4

From another angle: consider predicting a system's future behavior. If you have tools that measure an observable effect (say, the latitude of sunrise vs. day of year) to a crude accuracy, you can design and build a new tool and see if it fits the observed data better, with more precision, and predicts future events better. If it does, you know you've improved the state of the art.

Carl Witthoft
  • 11,106
4

Why can't you use imprecise tools to make more precise tools? Consider how humans created hammers and anvils: they must have used lumps of metal to make a vague outline of a hammer, then used that to create a hammer with a handle, then used that to create other tools such as molds to standardize production. It seems to me that a better way to put it is that an imprecise tool takes LONGER to measure or do a certain piece of work, and a more precise tool can do that same work quicker and better. There's some cut-off point (a hammer can't work at the atomic level), but it seems like a ladder where each technology builds off the previous one.

Issel
  • 149
2

There is a nice answer by Steeven; I would like to add some interesting notes.

The first ever particle accelerator was built in the 1930s and it was just 100 mm in diameter. Today, we are using particle accelerators that are giant in size (the Large Hadron Collider (LHC) is the world's largest and highest-energy particle collider and the largest machine in the world; it lies in a tunnel 27 kilometres (17 mi) in circumference and as deep as 175 metres (574 ft) beneath the France–Switzerland border near Geneva) and can reach ever higher energy levels.

https://en.wikipedia.org/wiki/Particle_accelerator

We are using ever newer (in this case bigger) tools as you say, to make ever more precise measurements (scattering experiments) to test the quantum realm.

In quantum mechanics, if you want to probe physics at short distances, you can scatter particles at high energies. (You can think of this as being due to Heisenberg's uncertainty principle, if you like, or just about properties of Fourier transforms where making localized wave packets requires the use of high frequencies.) By doing ever-higher-energy scattering experiments, you learn about physics at ever-shorter-length scales. (This is why we build the LHC to study physics at the attometer length scale.)

A list of inconveniences between quantum mechanics and (general) relativity?

So I believe this is a beautiful example of how, as you say, we use ever newer (bigger and bigger) tools, where the "precision" of these new tools depends on how powerful they can be built (what energy levels they can reach), and ultimately we are using these giant tools to probe the smallest possible world, the quantum realm.
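As a rough back-of-the-envelope version of the quoted point (Python; the energies are illustrative and the relation is only an order-of-magnitude estimate), the length scale a scattering experiment can probe goes roughly as $\hbar c / E$:

```python
HBAR_C_MEV_FM = 197.327                # hbar*c in MeV * femtometre

for energy_gev in (1, 100, 1000):      # illustrative collision energies
    length_m = HBAR_C_MEV_FM / (energy_gev * 1000) * 1e-15   # fm -> m
    print(f"{energy_gev:>5} GeV -> ~{length_m:.1e} m")
# hundreds of GeV already probe around an attometre (1e-18 m)
```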

2

Related to other answers, UW physics professor John Sidles presented this as a puzzle about a decade ago:

https://rjlipton.wordpress.com/2012/04/25/cutting-a-graph-by-the-numbers/#comment-20036

You are marooned on a desert island whose sole resources are sand, sea, and coconut trees. Construct a surface flat to atomic dimensions.

The following answer beautifully blends engineering, physics, and mathematics.

• Construct a fire-bow from bark and branches (engineering)

• Start a fire, and melt sand into three irregular lumps of glass (physics)

• Using sea-water as a lubricant, grind each shape against the other two (mathematics)

Continue this grinding process, varying the pairing and direction of the grinding strokes, until three flat surfaces are obtained. Then it is an elegant theorem of geometry that three surfaces that are mutually congruent under translation and rotation, must be exactly flat.

Essentially in consequence of this geometric theorem (which was known to Renaissance mirror-grinders), all modern high-precision artifacts are constructed by (seemingly) low-precision grinding processes... the quartz-sphere test-masses of Gravity Probe B are perhaps the most celebrated examples.

1

I think this has a place here, but seems to be asked in a manner where more engineering concepts can be utilized.

Take electron microscopes. All you really need is a vacuum, some manner of producing free electrons into the vacuum, and then a focusing magnetic field. You can experiment with the strength of the electron beam and the degree of focusing. Then two orthogonal fields of varying strengths are used to deflect the beam in the x and y directions. Any conductor and capacitor will absorb the reflected electrons and store a voltage; that voltage is read, and the beam is offset slightly again. The stored voltages are used to represent the brightness of an image pixel, and you look at the image. You play around with all these voltages and get higher magnifications.

Here nobody ever had to build small parts. The vacuum can be produced in a variety of different ways, the electrons can be stripped from ionized gas, or ejected from metals.

Similar is the process behind many nanotechnologies. They use silk screens, chemical emulsions, or image-focusing lenses.

The limiting factors end up being the physical theories, which are probably approximated well enough by basically approaching the atomic scale. And often this can be reached. But to go to precisions beyond the atom, they go to CERN, which can be viewed analogously as a large-scale electron microscope, and which is also a much, much slower version of the same process. And today they are presumably still assembling this image.

Then there are different statistical concepts used in chemistry to isolate chemical compounds via stirring or agitating. A lot of these things use classical particles to model things, and are long, complicated processes. This is how they can extract DNA and other things which originally exist in tiny proportions. They can also engineer reactions which magnify the small presence of a substance.

On the large scale, it is different yet again. Watchmakers can build highly precise and efficient watches, useful for celestial observation and navigation, with reduced error propagation. In some space vehicle applications such large-scale engineering can still be the only solution.

As you can see from these three examples, the mechanisms which dictate the degree to which precision can be obtained are mostly unrelated to each other; instead they are dictated by the predictable aspects of nature in the regime in question. There does seem to be a reduction of applicability at larger scales, but this could be due to the broad range, coupled with our expectations for smaller scales. Also, large equipment is often unfeasible, requiring a lot of resources.

There does appear to be an empirical observation of a common phenomenon where increased precision is obtained by increasing the size of some parameter. However, as we look at the examples, typically this gain is small and limited once the proportions are stretched beyond what the process had been developed for. This is where we see that a sound theoretical understanding yields the greatest gains, by aiding the intuition in the development of an entirely new process optimal for the regime of phenomena in question. An example of this would be when scientists use celestial observations on large scales to validate theories where such devices would be impossible to achieve merely by expanding some parameter's size.

1

Greeks used a compass and straight edge. A compass is as precise as an A-frame anchored with a needle, and a straight edge is some wood that is sanded flat by a carpenter. One wooden straight edge can be used to create a straighter straight edge because the sanding grit is small and it evens out the bumps.

Distances were an issue until the micrometer, which gave birth to nano-precise engineering. There are very good documentaries about early micrometers. Later micrometers and straight edges became laser-precise, as reliable as photons, i.e. relatively accurate, though not perfectly so.

bandybabboon
  • 1,479
1

Your intuition that you can only get precision from precision is correct, but your conclusion that you need precise instruments to get precision is incorrect. The trick is always to use precision found in nature to get precision. You should actually look at how people make things to get an idea of where they are getting their desired precision from. For example, one can obtain a nearly perfect parabolic mirror by using the physics of a rotating liquid under uniform gravity. Another is to get a very precise measurement of time using the physics of atomic state transition, which is a clear example of using the central limit theorem to obtain a statistical guarantee on the precision of the output of the atomic clock.
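A small sketch of the rotating-liquid example (Python; the 7 rpm rotation rate is an assumed, illustrative value): the free surface of a spinning liquid settles onto the paraboloid $z = \omega^2 r^2 / 2g$, so its focal length $f = g/(2\omega^2)$ is set by nothing more than gravity and how fast you spin the dish.

```python
import math

g = 9.81                     # m/s^2
rpm = 7.0                    # rotation rate of the dish (hypothetical)
omega = 2 * math.pi * rpm / 60

focal_length = g / (2 * omega**2)
print(f"focal length ~ {focal_length:.2f} m")

for r in (0.0, 0.5, 1.0):    # radius across the dish, metres
    z = omega**2 * r**2 / (2 * g)
    print(f"r = {r} m -> surface height {z * 1000:.2f} mm")
```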

user21820
  • 2,937
1

In error analysis, errors accumulate only if all the sources of error are independent. That need not always be the case. Furthermore, using different instruments, like moving from cloud chamber detectors to semiconductor detectors, has increased resolution by orders of magnitude, the reason being that some materials are more sensitive than others, and you can always create new materials!

0

Measure many input variables. Compute the resultant measurement from all of them.

An aggregate of more than one estimator, each performing better than random chance, is better than any one of them alone.
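One standard way to do this is inverse-variance weighting; here is a minimal sketch (Python, with made-up readings and uncertainties):

```python
import numpy as np

# Combining two imperfect instruments: weight each reading by 1/variance.
readings = np.array([10.3, 10.1])    # same quantity measured by two devices
sigmas = np.array([0.3, 0.1])        # their individual uncertainties

weights = 1 / sigmas**2
combined = np.sum(weights * readings) / np.sum(weights)
combined_sigma = 1 / np.sqrt(np.sum(weights))
print(combined, combined_sigma)      # ~10.12 +/- 0.095, better than either alone
```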

Vorac
  • 690