Wednesday, June 06, 2007

Some Moore's Law Corollaries

Computing bang for the buck doubles every two years. That's a Moore's Law corollary. What is called "Moore's Law" today was presented simply as an observation in a 1965 paper by Dr. Gordon Moore, who later co-founded Intel. The doubling he observed then has continued for over forty years, and his observation is called a law today.

Perhaps some underlying natural phenomenon drives this process. Perhaps the observation was a self-fulfilling prophecy. Perhaps Moore's Law is a projection of the ever-quickening advance of technology. Whatever its cause, it continues.

Moore expressed his idea in hard physical terms: the number of transistors that can fit on a silicon chip of a given size doubles at a regular interval. In his 1965 paper he put the interval at one year; in 1975 he revised it to two years. Since then the rate has held remarkably steady.
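To make the compounding concrete, here is a back-of-the-envelope sketch in Python. The starting count and time span are made-up illustration values, not figures from Moore's paper.

    # Rough sketch: transistor count doubling every two years.
    # start_count and years are illustrative values, not historical data.
    def transistors(start_count, years, doubling_period_years=2):
        return start_count * 2 ** (years / doubling_period_years)

    # Ten doublings in twenty years: about a thousandfold increase.
    print(transistors(1000000, 20))  # 1024000000.0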

Several nice things happen when transistors get more numerous in the same space.

First, they get smaller. This follows naturally. Smaller transistors allow the products that contain them to shrink as well. A computing gadget of today, like a USB WiFi adapter, is likely to be half the size it was two years ago.

Second, transistors get cheaper: roughly half the cost per transistor with each doubling. The cost of a USB adapter is becoming trivial, and poor people can more and more easily afford to explore computer literacy.

Third, smaller transistors use less power. A computer that would have required an 80-watt power supply years ago can run today off a hand crank that charges a little battery. Computers can go places that power lines don't.

Fourth, transistors switch faster as they shrink, roughly doubling in speed when they halve in size. Faster transistors mean faster signal processing, so more communications bandwidth becomes available and the cost of bandwidth decreases.

This increase in bandwidth happens at about triple the rate of the Moore's Law increase, according to technology writer George Gilder. Ever faster connections can be delivered ever more cheaply to the ever growing hordes of new computer users. Whether the market for new computers that connect to the web will eventually saturate is unclear. Gilder's point is that as the cost of connection becomes trivial, data flow becomes effectively free.
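Gilder's rate can be sketched the same way. Assuming, purely for illustration, that computing power doubles every twenty-four months while bandwidth grows three times as fast (doubling every eight months), a decade produces wildly different multipliers:

    # Sketch of Gilder's claim: bandwidth grows about three times
    # as fast as computing power. The doubling periods below are
    # illustrative assumptions, not measured figures.
    months = 120  # one decade
    compute_growth = 2 ** (months / 24)   # doubles every 24 months -> 32x
    bandwidth_growth = 2 ** (months / 8)  # doubles every 8 months -> 32768x
    print(f"compute: {compute_growth:.0f}x, bandwidth: {bandwidth_growth:.0f}x")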

The cost of hard drives, optical drives, and even of blank CDs and DVDs also seems to follow the downward spiral.

As both storing data and moving it from place to place become cheaper and cheaper, data disperses. Data wants to be free. A blank CD used to cost a dollar, and it had to hold something important. Today, a blank CD costs ten cents and can hold near-trivia, yet still be worth its cost.

One network theory (Metcalfe's Law) explains that every time a new node is added to a network - like a computer joining a LAN or a new site appearing on the web - all the other nodes on that network can link to it, and the whole network is enriched.

A network of [n] nodes has [n(n-1) / 2] possible links. Its potential for connectivity - and therefore its potential utility - grows roughly as the square of the number of nodes. Triple the membership in your social club and you'll have nine times the fun. (With three times as many members, each member has three times as many potential connections. If they all get along together.)
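The arithmetic is easy to check. A minimal sketch in Python, counting the possible links in a fully connected network:

    # Metcalfe-style link count: n nodes have n(n-1)/2 possible links.
    def possible_links(n):
        return n * (n - 1) // 2

    # Tripling a ten-member club multiplies the links by nearly nine:
    print(possible_links(10))  # 45
    print(possible_links(30))  # 435 -- roughly nine times 45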

If all those transistors on Moore's chip are connected together, as in a "System on Chip", or SoC, then as their quantity doubles, the systems built on them take ever longer (and cost ever more) to design. Compounding complexity rears its ugly head. The Master Plan develops too many bells and whistles.

Eventually we may learn how to let problems define for themselves the sort of system they need for their solution. Just throw ten billion transistors at a problem and let them sort themselves out. This works for life forms; it may work for digital forms as well.

What if the transistors, instead of increasing in count, got smarter and smarter? Suppose they learned how to network and swap jokes? Faced with a shared problem, would they develop specialties? Would they form cliques and develop political leanings? Would they discover primary guiding principles for developing real-world solutions, the closest that machines can ever come to the spiritual?

Is "Do unto others as you would have them do unto you" a computable result?

One may know sooner than one thinks.
