Physical sensing as technique

Take a rope, mark its length in feet, tie a weight to one end, and drop it into a body of water to measure depth. Can a tool be any more basic than that? This technique is called depth sounding. If you were sufficiently methodical (and obsessive) you could make maps this way. (This has been done.) The method is tedious, gets you all wet, ruins your hands, and doesn't work for really deep water. How else to measure water depth?

By exploiting properties of matter, that's how. Sound propagates through water at about 4,900 feet per second. The technique of echo sounding produces a brief pulse of sound in the water; the pulse bounces off the bottom, reflects back to the ship, and the time out and back is measured. A tiny bit of arithmetic (the speed of sound in water multiplied by the round-trip time, divided by two) and you have a distance measurement, without rope. Though this is more complex than the rope technique, it is readily done by machine. Once done, the two techniques are conceptually interchangeable; they produce the same result.
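
As a minimal sketch of that arithmetic in Python (the constant and function name here are mine, for illustration only):

  # Depth from an echo sounder: the pulse travels down and back,
  # so the one-way distance is speed times round-trip time, divided by two.
  SPEED_OF_SOUND_IN_WATER = 4900.0          # feet per second, approximate

  def depth_from_echo(round_trip_seconds):
      return SPEED_OF_SOUND_IN_WATER * round_trip_seconds / 2.0

  print(depth_from_echo(0.5))               # a half-second echo: about 1225 feet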

The particulars of the techniques here are tedious and unimportant. Instead I'd like you to think about this as the more abstract, "higher level" problem of sensing. You "know" there is a bottom to a lake/river/ocean, but it is outside the reach of all of your bodily senses. By what extensions -- tools -- can you solve the problem at hand?

Return to the depth-sounding Wikipedia page, "Sounding", near the small picture of the 19th century frigate, and think about what the "script" between the leadsman and the captain up on the bridge looks like:

[on the rope] marks were made [such] that it was possible to "read" them by eye during the day or by feel at night. The marks were at every second or third fathom, in a traditional order: at 2, 3, 5, 7, 10, 13, 15, 17, and 20 fathoms. The "leadsman" called out the depth as he read it off the line [to the captain, listening up on the bridge of the ship].

The leadsman (actor) senses the bottom depth, calling out the results to the captain (actor), who adjusts the direction of the ship's motion accordingly. It's a "loop"; the leadsman calls out, the captain adjusts, continuously and repeatedly. Step by step, here's the task:

  1. Measure depth (leadsman).
  2. Call out depth (leadsman).
  3. Adjust ship's heading (captain).
  4. Repeat.

That's all I mean by "script". Conceptually, not too difficult I hope.
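
Written as a program, the script is just a loop. Here is a minimal sketch in Python; measure_depth() and adjust_heading() are made-up stand-ins for the leadsman's rope and the captain's orders:

  import random

  def measure_depth():
      # Stand-in for the leadsman's rope (or an echo sounder).
      return random.uniform(5, 30)                  # fathoms, made-up values

  def adjust_heading(depth):
      # Stand-in for the captain: steer away from shoaling water.
      if depth < 10:
          print("Captain: come about!")

  for _ in range(10):                               # 4. repeat (a real ship never stops)
      depth = measure_depth()                       # 1. measure depth (leadsman)
      print(f"By the mark, {depth:.0f} fathoms")    # 2. call out depth (leadsman)
      adjust_heading(depth)                         # 3. adjust heading (captain)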

Note that there is no mention of time in this example. One can assume the "time scale" is human. Since you are a human, you more or less automatically have a feel for how long it would take to pay out a rope, pull it back in, call out the number, etc. Seconds to a minute or three? Something like that. No big deal really.

Time does not exist in a computer. Time is another human abstraction! What does exist is sequence (step 1, 2, 3, 4...). (Computers of course contain clocks, and tasks can be tied to them as desired.)
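
A tiny illustration of the point, using Python's time.sleep() as the tie to the clock; everything else is just sequence:

  import time

  print("step 1")          # these three run in sequence,
  print("step 2")          # as fast as the machine can manage --
  print("step 3")          # no notion of "how long" is involved

  time.sleep(1.0)          # only here does the program consult a clock
  print("step 4, one second later")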

Paying out a rope and pulling it back in takes time (human, machine, or otherwise), of course, and the physicality of it all informs our bodily sense of what the task involves, how much effort is required, and how long it takes. But electronic sensors attached to a computer are often very fast, reacting in as little as millionths of a second. What if depth sounding with a rope could be (magically) done in one second? Or 1000 times a second? Is that even useful? Could the ship's captain assimilate information that fast? Even if s/he could, would it help? Likely not -- ships have tons of mass, and react slowly. It is likely that making an adjustment to a ship's direction more than once a minute is a complete waste of time and energy. Sailors are certainly well aware of these characteristics, and other than "hurry!" within a human scale of time, there is no need to speed up the process 10, 100, or 1000 times. It was impossible anyway, and not likely to improve navigation.

When sensing (or acting) with a computer, you need to have some grasp on time. Referring back to the doorway light sensor in the "Scripting a computer" example, how often would you need to "look" at the light sensor to reliably "see" a person walking by? The answer is easily puzzled out if you literally act out the scenario with a helper. If you look once a minute, you will certainly miss nearly everyone. If you look 1000 times per second, you'll catch every mortal human.

For the simple task of detecting people walking through doorways, it's clear that as long as you look "often enough", it doesn't really matter how often you look. (In my experience, something around 50 times/second is more than adequate for slow events like that.)
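
In code, "often enough" just becomes a short pause inside the loop. A sketch, assuming some read_light_sensor() function exists (the name and threshold here are made up; real hardware would supply the reading):

  import time

  SAMPLES_PER_SECOND = 50                  # plenty for a person walking through a doorway

  def read_light_sensor():
      # Hypothetical stand-in; real code would read actual hardware here.
      return 1.0                           # 1.0 = beam unobstructed

  blocked_before = False
  while True:
      blocked_now = read_light_sensor() < 0.5        # beam interrupted?
      if blocked_now and not blocked_before:
          print("Someone just walked through the doorway")
      blocked_before = blocked_now
      time.sleep(1.0 / SAMPLES_PER_SECOND)           # look 50 times per second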

If you are observant you might notice that the word "slow" above is a relative (and subjective) judgement. In daily life, as in depth-sounding with rope, the time scale of things seems obvious, because we all have the same human body. Computers are not bodies, but stupid sequential machines capable of great speed. Suddenly a characteristic of events in the world that you have comfortably overlooked needs to be made explicit. This is often confusing, and a common source of problems when conceptualizing solutions.

Though we will never approach this scale of things (in speed or complexity), consider the speed of events when you are reflecting laser light off the microscopic groove in a spinning music CD. The reflected light represents 44,100 samples per second, and the time between each -- roughly twenty millionths of a second -- is absolutely critical. The complexity of decoding CD data is well beyond the scope of anything we'll do here, and I mention it only for scale.