Thursday, July 21, 2011

Malevolent Social Engineering in Open Source Software

The following paper was written in October of 2010 and distributed in draft form to several infrastructure security organizations.  Not a single one responded or gave any indication that it was considering the problem.  I now publish it openly for indexing by any search engines that happen along.  The date is 21 July 2011.  How long will it be before we hear about these types of attacks on the evening news?

The examples are written in a form that many readers may enjoy.  I hope that I have conveyed some sense of the ease with which even quality programmers may be duped by crafty opponents.  Toward the end of the paper things get rather technical.  Don't worry -- when you reach parts you don't understand, you have probably already gotten all you need. 

Malevolent Social Engineering in Open Source Software
Brian McMillin

A hypothetical attack on core open source software technologies is presented.  Extreme danger lies in the fact that these potentially compromised core technologies may be incorporated into an almost unlimited number of different application programs, unknowingly created and marketed by unrelated organizations which may be completely unable to determine if they are distributing malware. Mitigation strategies are discussed, although none are anticipated to be effective.

Open source software is an increasingly important method of developing modern applications and tools.  In many cases the collaborative work of different authors provides for new features and qualified review that would be impractical for any corporate effort.  The wide availability, ease of use, and inherent peer-review of open source packages makes them tremendously appealing to virtually all developers.

Unfortunately, it is this very collaborative nature and peer-review that opens the door for social manipulation and creating the illusion of quality and safety while masking malevolent software.

The term malware is usually used to mean intentionally malicious software designed to compromise a target system.  From the user’s perspective there is little difference between a system that fails due to deliberate machinations and one which fails simply due to buggy software.  Accidents and intentional attacks have the same effect.  In this analysis I treat both cases equivalently.

Social engineering is a term associated in the public’s mind with spreading computer viruses via email.  Disguising a threat with some desirable or benign coating (a picture of Martina Navratilova, or a valentine from a secret admirer) causes the user to circumvent the computer security system.  A threat that causes a panic reaction can thwart common sense: “Your computer has a VIRUS! Click here to fix it.” or “Your account has been suspended due to suspicious activity.  Click here to sign in and review your transactions.”

Social engineering in the open source software development community can take many forms.  Popular tools or packages, a friendly author in a forum, beta testing opportunities, web-based code snippet libraries - all can be the source of code which may fail to receive the scrutiny that it should.

Modern software systems are far too complex for any individual or organization to adequately evaluate, monitor, test or verify.  Malevolent code can be incredibly compact - sometimes requiring only a single character.


    Let me begin by emphasizing that there have been no known intentional uses of these attacks to date.  The examples describe accidentally introduced bugs in actual software that could be devastating if carefully placed by a knowledgeable adversary.

Example: Single-Bit Date-Time Bug

The firmware for an access control system was found to contain a single-character typographic error which had the effect of rendering day-of-week calculations inaccurate.  The error was discovered during testing, only when employees were unable to enter the building on the first Monday of April.  Supervisor access was not restricted.  Troubleshooting revealed that the system believed that the date was part of a weekend.  The erroneous line of code is reproduced below.

    DB    31,28,31.30,31,30,31,31,30,31,30,31    ; Month Lengths

This example is particularly noteworthy because the error (substitution of a period for a comma) is actually only a single-bit error in the ASCII encoding of over 300K bytes of source code.  The keys are adjacent on the keyboard, and the difference in visual appearance of the characters is minimal (and was barely discernible on the displays being used).

The error was not discovered during initial development testing because it was dependent on future values of the real-time clock.  Code review did not catch the error because reviewers focused on verifying the “important” things - in this case the sequence of numeric values - and took the punctuation for granted.  The assembler failed to report an error because an obscure syntactic convention allowed the overloaded period character to be interpreted as a logical AND operation between the two integer values.
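The single-bit claim is easy to verify.  The sketch below (Python, with the AND interpretation of the period taken from the description above) shows both the one-bit ASCII difference and the collapse of two table entries into one:

```python
# A quick check: ',' (0x2C) and '.' (0x2E) differ in exactly one bit, and
# the assembler's AND interpretation silently collapses two entries.
diff = ord(',') ^ ord('.')
assert bin(diff).count('1') == 1       # a one-bit error in the source file

intended  = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
corrupted = [31, 28, 31 & 30, 31, 30, 31, 31, 30, 31, 30, 31]  # "31.30" -> 31 AND 30

print(len(intended), len(corrupted))   # 12 entries vs. 11: every month after
                                       # February shifts, breaking day-of-week math
```

Note that the corrupted table is not obviously wrong to a reviewer scanning values: 31 AND 30 yields 30, a perfectly plausible month length.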

The line between accident and malice can be quite fuzzy.  This ambiguity can allow a knowledgeable adversary to obscure an attack by hiding it among hundreds of lines of well-written, clean code.  Furthermore, since the malicious nature of any error can easily be explained as human error, the attacker remains free to try again if discovered.  The peer-review process may even be commended for finding and correcting the error, while giving the adversary additional information to improve the next attack. 

Example: Intel FDIV Bug

The Intel FDIV bug was an error in the floating point division algorithm in certain versions of the Pentium processor.  Apparently, the actual underlying error was confined to five cells in a lookup table that were unintentionally left blank. 

The effect was that software running on these processors would occasionally receive computational results which were in error after the fifth decimal digit.  Subtle errors such as this are extremely difficult to detect - in fact it took a skilled number theorist with great tenacity several months to isolate and demonstrate the problem. 

In any case, it would have been far easier to certify the processor's correctness at the design stage by ensuring that the lookup tables were computed and verified by multiple independent sources prior to production.  In fact, by the time the bug was publicized, Intel had already produced processors using the same algorithm which were free of the bug. 

Thomas R. Nicely discovered and publicized this bug during 1994, and in December of that year Intel recalled and replaced all affected processors.  In his analysis of this situation, he concludes:
    Computations which are mission critical, which might affect someone's life or well being, should be carried out in two entirely different ways, with the results checked against each other. While this still will not guarantee absolute reliability, it would represent a major advance. If two totally different platforms are not available, then as much as possible of the calculations should be done in two or more independent ways. Do not assume that a single computational run of anything is going to give correct results---check your work!
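Nicely's advice is straightforward to apply.  The sketch below performs a division twice - once in ordinary floating point, once through Python's exact rational arithmetic as a stand-in for a second platform - and refuses to return a result if the two disagree:

```python
# Sketch of dual-path computation. The exact-rational route here stands in
# for the "entirely different way" Nicely recommends.
from fractions import Fraction

def checked_divide(a, b, rel_tol=1e-12):
    hw  = a / b                              # ordinary hardware floating point
    ref = float(Fraction(a) / Fraction(b))   # independent exact-rational route
    if abs(hw - ref) > rel_tol * abs(ref):
        raise ArithmeticError("division results disagree -- trust neither")
    return hw

# 4195835 / 3145727 is the operand pair that famously exposed the FDIV flaw.
print(checked_divide(4195835, 3145727))
```

On a flawed Pentium the two paths would have diverged after the fifth digit and the check would have raised immediately, rather than letting the bad quotient propagate silently.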

The legacy of this bug is that, fifteen years later, many development tools for Intel-based software still include conditional code relating to the recommended work-around for this hardware error.  The mentality that justifies this as “Oh, that’s always been there” or “It doesn’t do anything, but it can’t hurt” is a symptom of a larger problem.  Unnecessary code is always harmful.  At the very least it allows extra opportunities for undetected corruption to occur, and it fosters an uncritical mindset in the reviewer. 

Code paths that are never executed can never be tested.  Their presence in modern production code should be considered suspicious.  Hiding malware in such untested but ubiquitous code potentially allows for its wide distribution.  Dormant code such as this needs only a suitably crafted trigger to affect all compromised systems.

This five-value error caused an economic impact of $500 million to Intel in 1995, and is still being felt in unquantifiable ways today. 

As the Iranian nuclear program found out, to its own detriment, it is never a good idea to run unverified industrial control software on your black-market enrichment centrifuges.

The precision bearings tend to eat themselves for lunch.

What if the table errors in the floating-point algorithm were not as blatant as being left zeroed out? What if the tables contained a carefully selected number of random errors? And were embedded in the floating-point unit of a counterfeit GPS chip? And that chip happened to find itself in the terminal guidance system of an opponent's missile? And that the only effect was to change the rate of successful position updates from 100 per second to one per second? And the CEP (circular error probable) for the missile went from 3 feet to 3000 feet?

This kind of thing could win or lose a war.

And how could one ever expect to detect such a deeply-embedded, subtle attack?

How much would such an attack cost?

Would it be worth it for an adversary to try?

Example: NASA End-Of-Year Protocol

NASA space shuttles use a voting set of four primary flight computers, plus a fifth backup machine, for flight control operations.  The four primary systems are essentially identical, and each runs identical software; the backup runs independently developed software as a guard against common software faults.  The voting arrangement is intended to detect and mitigate hardware failures, as these are deemed to be the most likely source of problems during a mission lasting two weeks or so.

Even so, flight rules prevent any shuttle from flying on New Year’s Day, since it is well recognized that the operating software cannot be positively certified to operate correctly when the year changes.  This is especially true when the shuttle orbiter is viewed as a small part of the much larger system involving communications, tracking, navigation, and planning systems which are geographically distributed throughout the world.  Ensuring that every component of this worldwide network will be free of anomalies when the year changes is viewed as an insurmountable problem and an unnecessary risk.

Example: McAfee Automatic Update

On April 21, 2010 McAfee Software released an update to its anti-virus software which incorrectly identified legitimate SVCHOST.EXE operating system files on Microsoft Windows XP systems as the W32/Wecorl.a virus.  Affected systems were locked in an endless reboot sequence and required manual intervention in the form of a local data load by a knowledgeable person to recover.

At least one police department instructed its officers to turn off their patrol car computers to protect them from the McAfee update.  It is unclear why every patrol car should have been running anti-virus software in the first place.  Much greater security and performance could be gained by closing the department’s network and installing proper protection at the gateways.

Anti-virus software is by its very nature a social engineering phenomenon.  The threat of malware and the lack of confidence in our legitimate operating systems and software have led us to the perception that we must install software which slows performance and causes unpredictable and non-deterministic behavior under normal circumstances.  The fact that perfectly good, working systems can have their behavior altered by anti-virus updates on a daily (or perhaps hourly) basis is, in itself, a source of great concern.

The fact that updates are allowed to proceed in an automated mode may be acceptable or even desirable for consumer products.  For dedicated applications or mission-critical systems there is little justification for automatic updates. 

Example: Adobe Flash Player

Much controversy attends the question of Adobe Flash player and HTML 5 features on mobile devices such as iPhone.  It has been claimed that the Flash player is buggy, a resource hog, and responsible for many system crashes.  The Flash player is a proprietary piece of software implementing a proprietary standard.  It is difficult to understand why the open source community, principally revolving around the Android operating system, seems to be more vocal in its support of Flash than Apple, who champions the open HTML 5 standards. 

In reality, the controversy appears to be an example of social engineering, designed to allow a proprietary standard to maintain dominance in an evolving marketplace. 

It is true that Macromedia (now Adobe) filled a real and important need by developing Flash in an era when no standard mechanism for animation or user interaction with computers existed.  The time has come for such ad-hoc early forays into user interfaces to yield to more mature, carefully designed systems that incorporate the best features discovered so far and meet the requirements of modern systems.

Proprietary systems will always be more vulnerable than open systems due to the limited resources and unknown business priorities of the controlling company.

Example: Zune 30GB Music Player Leap Year Bug

On December 31, 2008, all Microsoft Zune 30GB Music Players failed during the boot sequence.  The software that failed was the Real-Time Clock driver firmware for the Freescale Semiconductor MC13783 Power Management and Audio chip.  Near the end of the boot process, the driver was called to convert the internal Days and Seconds representation of the current time into Year, Month and Day.  On the 366th day of the year, the year-conversion loop would fail to exit, thus causing the device to hang permanently at that point.  The work-around was to allow the batteries to run completely down and to wait until the next day to restart the device.

The problematic driver software was contained in the rtc.c source file provided by Freescale Semiconductor to customers of its products.  The ConvertDays function was missing an "else break;" statement which would have correctly terminated the loop.  Using the normal formatting conventions adopted by Freescale, this would probably have added two lines to the 767 lines in this file.

A second function in this same file, called MX31GetRealTime, uses exactly the same loop structure for year conversion and includes diagnostic message outputs, apparently intended for verifying the calculations.  In the day 366 case, this code would output the (incorrect) message “ERROR calculate day”, and then break the loop.  In other words, if Freescale’s own diagnostics had been used to test the code there would have been a single suspicious message among a flurry of output, but the diagnostic code would not have hung.  If the real code had been tested or simulated on the correct date, the hang would have been discovered.
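The published analyses of this failure allow a simplified reconstruction of the flawed year loop.  The sketch below is my own paraphrase in Python, not Freescale's C code, with an iteration guard added so that the sketch itself terminates:

```python
# Simplified reconstruction (an assumption based on published analyses of
# rtc.c) of the flawed ConvertDays year-conversion loop.
ORIGINYEAR = 1980

def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def convert_days(days, max_iterations=10000):
    year = ORIGINYEAR
    for _ in range(max_iterations):        # guard so this sketch terminates
        if days <= 365:
            return year, days
        if is_leap(year):
            if days > 366:
                days -= 366
                year += 1
            # BUG: missing "else: break" -- when days == 366 in a leap
            # year, nothing changes and the loop spins forever
        else:
            days -= 365
            year += 1
    return None                            # the real firmware hung here

assert convert_days(365) == (1980, 365)    # ordinary values convert fine
assert convert_days(366) is None           # day 366 of a leap year: hang
```

Every input except one converts correctly, which is precisely why ordinary testing never tripped over the flaw: the failing case occurs on exactly one day out of every 1,461.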

Note that the chip in question is called a “Power Management and Audio” chip.  Page 2 of Freescale’s Data Sheet lists 17 features for this chip, including battery chargers, regulators, audio amplifiers, CODECs, multiple audio busses, backlight drivers, USB interface and touchscreen interface.  The Real-Time Clock is item 13 of 17 on this list. 

It is clear that this is an example of a catastrophic bug in a “trivial” function, buried deep within mountains of code implementing “important” features.  This code was provided by a trusted supplier.  The features of the chip are so complex (and proprietary) that users (in this case, Microsoft) have little alternative but to accept the supplied code without exhaustive or critical examination. 

Example: Sony Root Kit

In 2005, Sony BMG Music released over 100 titles of music CDs that surreptitiously installed rootkit software on users’ computers running Microsoft Windows.  The alleged purpose of this rootkit was to provide copy protection for the music, but in actuality it provided cloaking technology and a back door for malware.  Prior to legal action and the eventual recall of all Sony CDs with the XCP technology, over 500,000 computers were compromised.

The corporate mindset at Sony that viewed their own consumers as an enemy, stark terror in the face of declining sales, and a total naivety concerning computer technology left them vulnerable to manipulation by groups selling Digital Rights Management software. 

The case of XCP also demonstrated that anti-virus services can be manipulated simply by the choice of names used by the malware.  Because it was being distributed by a giant corporation and was covered by the aura of anti-piracy claims, the anti-virus services spent more than a year allowing the infestation to grow.  This despite the fact that, in all respects, the software behaved maliciously by (1) being loaded from a music CD, (2) replacing system files, (3) cloaking registry entries and (4) conducting clandestine communications with a BMG host computer.

Sidebar: A Tirade Against Digital Rights Management Software

Digital Rights Management software may be viewed as malware, in that its purpose is to selectively block access to certain data or programs using arbitrary and unexpected rules.  Any software that behaves differently on one machine than another, or that works one day and not the next, should be viewed with great suspicion. 

DRM software is operationally indistinguishable from malware.  Test and verification of DRM software is, by its very nature, difficult for its own developers.  In addition, the presence of DRM features on a particular system makes the performance of that system essentially impossible to certify. 

Any software that cannot be backed up, restored, and made fully operational at an arbitrary point in the future should not be allowed in a professional development environment.  Software that includes timeouts, or that requires contact with a validation server is not reliable.  Any software whose continued operation is subject to the corporate whims of third parties is fundamentally unsafe.

Programs that include behaviors that are dependent on hardware identity (station names, MAC addresses or IP addresses), date - time values, random or pseudo-random numbers, and cryptographic codes are inherently difficult to verify.  If at all possible, these features, where required, should be carefully isolated from as much of the production code as possible.

Since there can be no universal guarantee of network connectivity or the continued operation of a central server (such as a licensing server), I would argue that any software that implements “time bomb” behavior or otherwise deliberately ceases to function if it does not receive periodic updates should be banned.

Experience has shown that DRM software is generally ineffective in achieving its stated goal, and causes undue hardship to legitimate users of the product.  Development efforts would be much more productive if they were directed toward improving the experience of all users, instead of trying to restrict some users.

Example: Physical Damage to Memory

In the late 1960's the DECsystem 10 used core memory for its primary storage.  There existed a memory diagnostic program designed to find errors in this core memory array.  The diagnostic proceeded to repeatedly read and write sequential locations.  It was found that this diagnostic would almost always find bad locations - even in known good arrays - and that entire rows would be genuinely bad after the diagnostic ran.  Investigation proved that the continuous cycling of the three-Ampere (!) select current pulses was physically burning out the hair-thin select lines in the array.

The memory design engineers had known of this possibility, but discounted it as a failure mode because the system was equipped with a semiconductor memory cache that would prevent repeated operations on the same address.  Naturally, the designer of the memory diagnostic included instructions that explicitly disabled the cache. 

Forty years later, our most modern portable devices use high density NAND flash memory as their storage mechanism of choice.  Flash memory relies on the storage of small quantities of electric charge in tiny cells, and the ability to accurately measure that charge.  In order to store new values in this type of memory, entire pages must be erased and then sequentially written.  The 16GB flash memory used in the iPhone 4 (for example) stores multiple bits in each memory cell using different voltage levels to distinguish values.  The ability of these cells to reliably store and distinguish bits begins to degrade after only 3000 page erase cycles.  Elaborate hardware and software mechanisms exist to detect and correct errors, and to provide alternate memory pages to replace failed areas.  In order to achieve acceptable production and operational yields and longevity, modern error correcting systems are typically capable of correcting 12 or more bit errors in a single block.   Furthermore, wear-leveling algorithms attempt to prevent excessive erase/write cycles on individual pages. 

Unfortunately, the memory management algorithms both in Samsung’s memory controller and in Apple’s iOS4 are proprietary.  Not only are the specifications of the individual subsystems unknown, but the interactions between the two are cause for concern. 

NAND Flash memory suffers from a mode in which repeated reads can indirectly cause adjacent memory cells to change state.  These changed cells will trigger the error detection and correction mechanism and be generally harmless.  It is unknown whether there is a threshold where a large number of bit errors in a page will cause that page to be moved or rewritten, and possibly even marked as bad.  The possibility exists, therefore, that simply reading flash in a pathological manner may result in additional hidden erase/write cycles, or possible additions to the bad block table.

It is also unknown how bad blocks are reported from the hardware to the operating system, and it is unclear how the file system will respond as the available known-good storage shrinks.  Meaningful studies or empirical results are difficult to achieve because of the statistical nature of the underlying failure mode, the number of levels of protection, and the differing implementations of different manufacturers and products. 

All systems should collect and make available absolute, quantitative statistics on the performance of these error detection and correction methods.  We can have no real confidence in a system if we do not know how close we are to the limits of its capabilities.  One thing is certain: “It seems to be working” is a recipe for disaster.

It is not beyond the realm of possibility that suitably malicious software could clandestinely bring virtually every page of the system’s flash memory to the brink of ECC failure and then wait for a trigger to push the system over the edge.
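The scenario can be illustrated with a deliberately crude model.  The disturb rate and the linear accumulation below are assumptions invented for the sketch; only the ECC limit of 12 correctable bits comes from the discussion above:

```python
# Toy model (assumption): read-disturb errors accumulate with read count,
# and a block fails once they exceed the ECC's correction capacity.
ECC_LIMIT = 12            # correctable bit errors per block, as cited above

def expected_errors(bit_errors, reads, disturb_rate=1e-5):
    # Linear accumulation is a deliberate simplification of the real,
    # statistical read-disturb process.
    return bit_errors + reads * disturb_rate

errs = expected_errors(0, reads=1_100_000)
assert errs <= ECC_LIMIT               # ~11 expected errors: still correctable,
                                       # and invisible to the operating system
errs = expected_errors(errs, reads=200_000)
assert errs > ECC_LIMIT                # a short burst of reads is the "trigger"
```

The unsettling property this model captures is that the first phase is silent: a system hovering at eleven corrected errors per block reports nothing unusual, yet a modest final nudge produces uncorrectable failures across every prepared page at once.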

This would be an example of software that can physically damage modern hardware, and leave the user with no recourse but to replace the entire device.


It would be preferable for the designers of development tools to strive toward the smallest possible set of features for the use of programmers.  By concentrating on the most frequently needed operations and making them clear and predictable, the review process will be simplified.  Obscure or infrequently-used features should only be invoked with great fanfare.  Long keywords or elaborate syntactic requirements will draw attention to the fact that this code is not “business as usual” and deserves careful scrutiny.

Vulnerabilities, Exploits and Triggers

Traditionally, malware such as trojans, worms and viruses has relied on some vulnerability in a computer system’s design, implementation or operation.  Logic errors, unchecked pointers and buffer overflows are examples of vulnerabilities.  In general this vulnerability is independent of the exploit, or actual malware, specifically written by an attacker.  Once introduced into a vulnerable system, the malware may require an additional trigger event to begin malicious execution.  This allows infection of multiple systems to proceed undetected until a particular date, or remote command, causes the nefarious code to spring forth.  The trigger will always appear in the form of data within the infected system.

In the present analysis, the distinction between the vulnerability and exploit may appear to be blurred.  A sufficiently knowledgeable adversary may subtly introduce the entire body of malicious code into a large number of different application programs by patiently corrupting core technologies.  Using the definitions above, the actual vulnerability is the software design methodology itself, and the exploit could be virtually any piece of commonly used core software.

My primary thesis involves the social engineering that could be used to corrupt otherwise benign and robust software systems.  A secondary topic involves the acquired vulnerabilities that have evolved in software development “best practices”.  This involves using hardware and software features because “that’s the way you do it”, without any critical reexamination of whether those features actually make any sense in the year 2010, or in the application being developed. 

Several of these “evolutionary vulnerabilities” are readily apparent.

1.    The use of core open source frameworks by many completely unrelated applications.
2.    The programming style that allows and encourages interleaving of distinct objectives within “tight”, “efficient” or “multi-purpose” functions.
3.    The use of needlessly compact source notation without redundancy or cross-checks.
4.    The practice of allowing access to every data structure that a function MIGHT need to use without explicitly stating that access to a PARTICULAR structure is desired.
5.    Allowing the use of unnecessarily similar variable and function names.
6.    Operator overloading.
7.    Implied namespaces and namespace obfuscation.
8.    Conditional compilation mechanisms.
9.    The inherent untestability of supporting multiple platforms.
10.    Unchecked and unconstrained pointers.
11.    The Stack.
12.    Loops that do not look like loops - callbacks and exceptions.
13.    Dynamic code creation and execution - interpreters.
14.    Portable devices that may operate unmonitored for extended intervals.
15.    Assuming that individual developers are experts in multiple programming languages.
16.    The vulnerability of different programming languages to naive mistakes.
17.    The lack of common version control systems among developers.
18.    The lack of a global cross-reference checking facility.
19.    The lack of inherent range and bounds checking at runtime.
20.    The lack of a central revocation authority.
21.    Automatic update systems themselves.
22.    The lack of a common threat analysis and notification system.
23.    The lack of a mechanism to track the installation of application programs in consumer devices.
24.    The lack of a mechanism to notify consumers of potential threats.
25.    The vulnerability of critical infrastructure to denial-of-service attacks.
26.    Trusted Software Developer Certificates that may easily be circumvented by simply supplying that Trusted developer with malicious tools.

The Stack As An Unnecessary Vulnerability

Since the 1960's the use of a stack-based architecture has been considered a requirement for computer systems.  The stack provides a convenient storage area for function parameters, return addresses and local variables.  It inherently allows for recursion.  It makes exceptions and hardware interrupts easy to implement.  It minimizes memory use by sharing a single, dynamic area.

In the world of formal logic, recursion often represents an elegant and compact technique of explaining a complex operation.  In the world of computer software it is almost always a serious mistake.  There are a few cases in which recursion provides an elegant solution to a problem, but I contend that the risks of allowing universal recursive operations far outweigh the few instances in which any real benefit is derived. Anything that can be done by recursion can be done by iteration, and usually in a much safer and more controlled fashion. 

In the absence of recursion, the maximum calling depth can always be computed prior to execution of any given function.  In the best case, this could be done with a static calling-tree analysis by the compiler or linker.  In the worst case, the program loader must handle calls through dynamic linkages, and the loader must perform the analysis.  Knowing the possible calling tree implies that the actual maximum possible memory requirement can also be derived.  It thus becomes unnecessary to specify arbitrary stack space allocations.  Programs can be treated in a much more deterministic manner.
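A minimal sketch of such an analysis, using a hypothetical call graph and made-up frame sizes, shows how an exact worst-case stack bound falls out of the calling tree:

```python
# Sketch of static calling-tree analysis (assumption: the whole call graph
# is known and acyclic, i.e. recursion-free). All names and frame sizes
# here are hypothetical.
CALL_GRAPH = {
    "main":       ["parse", "run"],
    "parse":      ["read_token"],
    "run":        ["step", "read_token"],
    "step":       [],
    "read_token": [],
}
FRAME_SIZE = {"main": 64, "parse": 32, "run": 48, "step": 16, "read_token": 8}

def max_stack(fn):
    # Worst-case requirement = own frame plus the deepest callee's need.
    return FRAME_SIZE[fn] + max((max_stack(c) for c in CALL_GRAPH[fn]), default=0)

print(max_stack("main"))   # an exact bound, not an arbitrary allocation
```

This is the computation a linker or loader could perform once for the whole program; the result replaces the guessed stack size with a provable figure.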

The fallacy of Mixing Data and Code Addresses - Modern hardware implements a single stack for each executable unit.  Programs use machine instructions to load function parameters and local variables into memory in the allocated stack area.  Call and Return operations use a program address placed in the same stack area.  This shared allocation is the vulnerability used by most “Arbitrary Code Execution” exploits.  It is completely unnecessary for the return address list to share a memory segment with function parameters and local data.  If this “conventional wisdom” were to be thoroughly reexamined, virtually all buffer-overrun exploits would be eliminated at the hardware level.  Data could still be wildly corrupted, but the flow of program execution would not be accessible to an attacker.
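A toy model makes the point concrete.  Everything below is invented for illustration; it shows only that when return addresses do not share storage with data, a buffer overrun corrupts data but cannot redirect control flow:

```python
# Toy comparison (assumption): a combined stack where return addresses share
# memory with locals, versus a split design that keeps them apart.
def run(split_stacks):
    returns = ["safe_exit"]            # return-address storage
    data = [0, 0]                      # locals, including a 2-slot buffer
    payload = ["A", "B", "evil"]       # three writes overrun the buffer
    for i, value in enumerate(payload):
        if i < len(data):
            data[i] = value
        elif not split_stacks:
            returns[0] = value         # combined layout: overrun reaches returns
        # split layout: the overrun corrupts data only; control flow is safe
    return returns[0]

assert run(split_stacks=False) == "evil"       # classic arbitrary-code execution
assert run(split_stacks=True)  == "safe_exit"  # data corrupted, flow intact
```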

The fallacy of Necessary Recursion - The vast majority of functions in a modern application have clearly defined, static calling trees.  These functions have no need for any recursive features, and any recursion indicates a flaw.  The fact that modern languages automatically allow and encourage recursion means that recursion is an Error-Not-Caught in almost all cases.  It does not seem unreasonable to require that recursion (both direct and indirect) be indicated by some affirmative notation by the programmer. 

The fallacy of Saving Memory - The lack of static calling-tree analysis and the assumption of recursion means that arbitrary-sized segments are allocated to the stack.  Arbitrary allocations are always erroneous and lead to the mistaken impression that the software is reliable.  No one actually knows how close a system is to a stack overflow situation.  The presence of unnecessary memory allocation is a waste of resources and leaves a memory area where undetected malware can reside.

The contention that the stack architecture saves memory is one of the elementary explanations of the appeal of the stack.  This might be true if the alternative is a naive implementation in which all function parameters and locals were concurrently allocated from global memory.  Calling-tree analysis can be used to allocate parameter frames statically, and yet use only an amount of memory identical to the worst case of the actual calling pattern.

The fallacy of Hardware Interrupts - In order to achieve any degree of security, modern systems always switch stacks when a hardware interrupt is encountered.  Thus, it is not necessary that more than a rudimentary allocation be made in the application memory space.

The fallacy of Dynamic Stack Frames - Virtually all modern code computes parameters and pushes them onto the stack prior to a function call.  The functions allocate space for local variables by further adjustments to the stack pointer.  These dynamically-allocated stack frames are a source of needless, repetitive code that could be eliminated in many cases by static frame allocation and intelligent code optimization.  Again, static calling-tree analysis is used to determine the required allocation of these frame areas.

The fallacy of the Memory Dump - It is assumed that memory dumps can be a useful tool to allow crash analysis and code verification.  In reality, the use of the stack architecture and its immediate reuse of memory areas for consecutive function calls means that the internal state of any function is destroyed shortly after that function exits.  If the stack frames were statically allocated the system would tend to preserve parameters and local variables after the completion of any particular function.  The implementations of exception-handling functions (or the dump facility itself) could easily be marked to use frames outside the normal (overlapping) frame area.

The open source development community is an ideal place to implement advanced compiler / linker / loader technology that revises the calling conventions used by modern software.  Every application that operates unexpectedly when the calling conventions are changed is an application that was most likely harboring design fallacies that had been unrecognized.  Consider this an opportunity to radically improve all open source software with a single paradigm shift. 

Hardware and software systems have grown mostly by accretion over the years.  The goal has almost universally been expediency: make it run fast and get it done now!  Little thought has been given to mitigating common sources of error, except in academic circles. 

Much effort goes into testing, primarily to validate the interoperability of various software modules or systems.  In general the goal is to ensure that changes made to a new version do not break features of a previously certified application.

In the biological world, organisms develop resistance to antibiotics through exposure.  Malware - whether accidental or intentional - will grow and thrive at the boundaries of the test cases.  Such malware may spread in a benign form for long periods, only to be triggered into an active form by a possibly innocuous event.


It has been demonstrated that it will be essentially impossible to exclude the accidental or deliberate introduction of malicious behavior into software during its development and maintenance. 

Therefore, instead of trying to control humans and their behavior, it would seem reasonable to treat the software itself as the adversary.  If every line of code, piece of data and linked module was considered a threat it might be possible to develop high quality threat abatement tools that would have a better chance of success than other approaches. 

The open source community is the perfect place to develop such mitigation strategies.  Proprietary software development efforts lack the resources, and tend to hide, deny and fail to document vulnerabilities.  Open source developers have the opportunity to take both white hat and black hat roles.  Adding test cases that succeed or fail in different implementations is a valuable contribution to the robustness of any software.  Such continuing development of both code and validation cases should be the norm.  Improvement should be continuous and incremental, without the need for monthly “Critical Updates” or other disruptive strategies that are unevenly applied and of questionable effectiveness.

1.    Software development methodology
    a.    Require the Designer to provide a complete natural-language functional specification document for all software systems, modules, and functions, as well as example test cases.
    b.    Require software to be written exactly to specification by at least two independent development groups, none of which were the Designer of the specification.  Preferably this will be accomplished in different programming languages.
    c.    Disallow direct communication between independent development groups.
    d.    Resolve ambiguities and conflicts between implementations by changes to the specification document, incorporated exclusively by the Designer.
    e.    Require each development group to provide test cases which are not shared with other development groups.
    f.    Provide each development group’s software to a Validation group which is not privy to the specifications.  The Validation group runs:
        i.    Stress tests with all known test cases,
        ii.    Stress tests with random inputs,
        iii.    Stress tests with random structures and data types,
        iv.    Stress tests with all supported operating environments,
        v.    and expects all results from each group to be identical.
            (1)    This implies detecting all changes to global memory and confirming that they are allowed and intended.
            (2)    This includes range and sanity checks for all returned values.
    g.    Validation group will record all resource utilization, including speed, memory usage, and external communication.
        i.    Resource utilization, including external memory and references, must be identical.
        ii.    Every failed validation must be documented and traced to its origin.  The nature of the original error must be identified and shared.  Repeated problem areas should be studied and mitigation methods developed.
    h.    One implementation will be chosen for production use, perhaps based on speed, compactness or programming language.  The alternative implementations will be available for validation testing of higher-level modules. 
    i.    New features and future versions will start with changes to the specification by the Designer and will end with comparison of recorded resource utilizations.
        i.    Any changes in resource utilization from one version to the next, especially global references, must be properly confirmed.
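The heart of steps f and g above -- independent implementations run against the same random inputs, with any divergence flagged -- can be sketched in a few lines of Python. The two midpoint functions here are hypothetical stand-ins for independently developed modules:

```python
import random

def validate(implementations, gen_input, trials=1000, seed=0):
    """Run every implementation on the same random inputs and
    report the first input where any results disagree."""
    rng = random.Random(seed)
    for _ in range(trials):
        args = gen_input(rng)
        results = [impl(*args) for impl in implementations]
        if any(r != results[0] for r in results[1:]):
            return args, results          # divergence found
    return None                           # all trials identical

# Two independently written versions of the same specification.
def mid_a(lo, hi):
    return (lo + hi) // 2                 # can overflow in C; fine in Python

def mid_b(lo, hi):
    return lo + (hi - lo) // 2            # overflow-safe formulation

divergence = validate(
    [mid_a, mid_b],
    lambda rng: sorted(rng.sample(range(-10**6, 10**6), 2)))
print(divergence)                         # -> None: the versions agree
```

A real Validation harness would additionally snapshot global memory before and after each call and compare resource utilization, per steps g.i and g.ii.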

2.    Stick with one set of development tools.  Do not change the core library that your developers use every time a new release comes out. Validation and version control are needlessly complicated if third-parties can randomly revise any pieces of your software.

3.    Use a version control system that captures every piece of software, tool, source file, header file, library, test file, etc. necessary to build and test each release candidate. 
    a.    Build the final release version on an independent system with a clean OS installation using only the files extracted from the version control system.
    b.    At the very least, when the inevitable disaster strikes it will be possible to identify the versions of your software that are affected.

4.    Develop a runtime linkage system capable of swapping out implementations of a particular function or module on the fly.
    a.    In the verification process, this would allow the verification system to generate random switches between implementations and ensure continued correct operation of the system.
    b.    In the operational case, normally only one implementation of each function would be distributed.  This mechanism would allow for the distribution of software updates into running systems without requiring a reboot in many cases.
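A minimal sketch of such a runtime linkage in Python; the `Swappable` dispatcher and the checksum example are illustrative assumptions, not an existing mechanism:

```python
import random

class Swappable:
    """Dispatch to one of several registered implementations of a
    function, switchable at runtime without relinking callers."""
    def __init__(self):
        self.impls = {}
        self.active = None

    def register(self, name, fn):
        self.impls[name] = fn
        if self.active is None:
            self.active = name

    def switch(self, name):
        self.active = name                # hot-swap; next call uses it

    def __call__(self, *args):
        return self.impls[self.active](*args)

# Verification mode: randomly alternate implementations per call
# and confirm the system's behavior never changes.
checksum = Swappable()
checksum.register("v1", lambda data: sum(data) % 256)
checksum.register("v2", lambda data: sum(b % 256 for b in data) % 256)
for _ in range(100):
    checksum.switch(random.choice(["v1", "v2"]))
    assert checksum([10, 20, 300]) == 74  # 330 % 256 = 74 either way
```

In production, the same `switch` call would install an updated implementation into a running system without a reboot.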

“What I tell you three times is true.”
- Lewis Carroll, The Hunting of the Snark

These suggestions may seem onerous, especially to small developers.  This type of approach can easily be implemented using only four individuals: a Designer, two Developers, and a Validator.  These roles may be traded for each different module or feature of a project.  Far from increasing effort or time-to-market, it could be argued that the improved documentation, cross-training and more robust final product actually reduce overall development effort.  New employees can be of immediate use and can be rapidly integrated into the corporate or community structure by assuming any one of the roles without the need for a lengthy training period.

Converting software to another language or porting it to different hardware will be greatly simplified by the comprehensive documentation and test cases inherent in this method.  Identifying the ramifications of bugs (detected by whatever means) will be more comprehensive and rapid if the development tools allow easy generation of a list of all software and modules that use a given feature.

Tuesday, July 19, 2011

On the Failure of Capitalism

The free market economy worked reasonably well in a 20th century world composed of multiple, competing nations and corporations.  The 21st century represents an entirely new challenge.  The unprecedented globalization and multinational aspect of every facet of the economy has led to a complete lack of competition in every important aspect of society.  This lack of competition removes the key protective feedback required to make capitalism a functional economic concept.

Caveat Emptor

All of the platitudes that our parents and grandparents tried to instill in us as guidelines for dealing with the world are rendered fallacious in light of insufficient knowledge.  All modern advertising and promotion actively prevents the customer from knowing what he is buying.  This allows the seller to redefine the transaction at his own discretion.  The customer is unable to compare offers from different suppliers on any basis other than marketing propaganda.

Obfuscation Wins

Electric and telephone companies, insurance companies, banks, automobile manufacturers all make their money by knowing more about their business than the customers.  These organizations are the only ones in a position to accurately compare the prices of their products, but they make their money by preventing accurate comparisons.  It is easy to make money by simply lying. You can use smoke and mirrors to cause your employees and customers to buy into, and spread, the lie.  And your competitors fan the fires of “freedom of choice” by adding lies of their own.

Literally no one knows the cost of a cell phone, a car, or a shirt at WalMart.  Every item comes with so many hidden costs, marketing gimmicks and sales incentives that it is impossible for customers, store employees, store managers or company shareholders to know anything at all about the actual cost of goods.  Without this information it is impossible to know if the management decisions and product selection are reasonable.  This argument does not even address the larger issues of personnel and overhead that play a major role in the profitability and sustainability of the retail environment.

Outsourced Middlemen

Virtually any aspect of modern business can be separated from the core business flow and outsourced to a third party.  Economic legerdemain can make this seem like a good idea.  The voice taking the order at the drive-up window of a fast food restaurant can come from a call center in Bangalore.  A company’s management should realize that well-meaning ideas such as this are fundamentally flawed. 

The overriding business case is that excessive outsourcing destroys the accountability required to understand the operation of your own business.  You no longer have the ability to examine the true costs associated with your operations.  With every organization trying to maximize their profit at all costs, each one will lie, cheat and steal to achieve that goal.  Now you have given every outsourced operation a reason to misrepresent themselves in an attempt to grow their own business at your expense.

Mob Rule

The reason that there are so many high-profile scandals involving Utilities, Finance, Government and the Press is that the leaders in these organizations form a closed society, feeding each other self-congratulatory stimulus that incentivizes more and more egregious behavior.  Their actions are completely divorced from outside, objective reality.  The legality or morality of their actions is buried under the avalanche of self-importance and the compulsion to take the bit in their teeth and run faster toward the precipice.

The intelligence of a mob tends to be lower than that of any individual.  The behavior of these corporate mobs may lead to the ultimate failure of the global economy.

Cascade Failures

The world financial markets are so ill-conceived and mis-designed that they are ripe for a catastrophic cascade failure.  Once the dominoes begin to fall there are no mechanisms to prevent total collapse.  Propping up failed micro-economies with loans that (objectively) can never be repaid is nothing more than a paperwork band-aid covering a still-festering pustule.  Making the balance sheet look good from some arbitrary perspective just delays the inevitable.

If some nation is deemed Worthy, aid should be given by more prosperous economies with no strings. If that nation is able to grow and develop, it will become a useful member of the world community.  It is not necessary for the wealthy nations to engage in balance-sheet self-aggrandizement while keeping an oppressive boot on the necks of the poor.  Economically successful partners represent a truly valuable return on investment that does not show up on any balance sheet.

The clarity that would be generated by simply writing off these “loans” would represent a needed breath of fresh air in all our economic calculations.  Draw a line in the sand.  Evaluate the reality of the situation as it stands now.  Do not continue to rely on dubious promises of payment extracted under duress from failed economies.

An objective look at the world’s major economies would make for much more rational decisions.  Major aspects of the U.S. economy are not sustainable.  Take, for example, the automotive industry.  The entire business model requires selling more vehicles, and therefore enticing more people to sign on to the idea.  But cars last longer, the population declines, and operating costs escalate.

I argue that virtually no one today actually wants a car.  They want a video game console with leather seats.  They would leave off the car part if they could.  They do not want to buy gas for it.  They do not want to insure it. They do not want to sign away their income to get it.  They do not want to sit in traffic with it.  The only reason that people buy cars, even today, is that they have been convinced that there is no alternative.

Soon the customer base will decide, en masse, that they really do not want or need a new car - despite ever-increasing marketing hype.

Without a rational plan, this will likely come as a Big Surprise to many investors.  And their reliance on idealistic projections with no basis or objective feedback may prove catastrophic.

The Chinese economy is another case-in-point.  Everyone seems to believe that China is set to become the dominant economy in the 21st century.  I find this hard to believe.  There is so much smoke and mirrors work involved in evaluating their business practices that I think that any reliance on such information is suspect.  Objectively, they have stepped up from a third-world economy by purchasing their place at the table of world powers.  In order to do so, they are selling their labor and resources for pennies on the dollar.  They can do this because they have a large population and geographic area.  But what they are doing is not in any way sustainable.  It shows no understanding of the shortcomings of 19th century industrialism.

Demand for Chinese goods may sag, whether due to natural disaster, economic turmoil or even a moral refusal to purchase goods manufactured with what amounts to slave labor.  Their reliance on the inflated promises of the financial wizards may well be catastrophic in the face of even a modest slowdown.

Feedback Limits

I have mentioned the loss of feedback that caused the economy to lose its equilibrium.  The equilibrium in question was the comfortable status quo of the late 20th century.  However, there are other, larger, feedback systems that will take over.  Even after a catastrophic failure, nature reaches a new balance - possibly in a completely unexpected configuration.  The fundamental constraints of nature provide the ultimate fail-safe. 

Adapt or Die

No matter how much wishful thinking and pious hand-wringing we engage in we must deal with the objective realities.  We cannot legislate success, or vote to repeal natural law.  We must act to cushion the impact of the failure of the current economic model, not continue propping up failed policies with ever-more-vehement rhetoric.

Buyer Rules

It is amazing all the clever (doubletalk) financing and (mumble) incentives and (ahem) volume discounts that a supplier will dream up if they are told, in advance, exactly how much the customer will pay.  The buyer must simply be realistic and refuse to be swayed by ANY escalation of the real bottom line.  Absolutely no contracts for the future -- purely pay-as-you-go.

Implementing this philosophy would solve the debt crisis facing our schools.  Very simply, we take the amount of current tax revenue and apportion it using exactly the same percentages as last year.  Salaries, maintenance, utilities and supplies.  All get the same PERCENTAGE of the revenue that they did before.  Extremely fair.  The electric company no longer dictates prices to the government.  Union workers no longer get arbitrary raises.  The customer (school district) takes back control of their own operation.  If the suppliers (employees and utilities) want the business, they are responsible for making it work.  If they do not want the business, there will be other, more efficient suppliers who will be able to step up.  No lay-offs or building closures.  No fear-mongering or threats.  No uncertainty.  Just across-the-board adjustment in payments.

What about the employees and suppliers?  Won’t they bear the burden?  No, they do not have to.  This is the ultimate trickle-down.  They will realize that they, too, can just say “NO!”.  They do not have to be at the mercy of ever-increasing costs.  They can actively regulate the prices that they pay – not simply settle for a choice between bad and worse.  This will re-introduce the balance that is missing when inflation is controlled only by arbitrary decisions by an elite few individuals, based on unverifiable currency market reports.

This would also work for ordinary consumers, IF....

Contracts and Lawyers

The sheer length of all modern contracts is an admission that the seller is pushing a product that the buyer does not want, will not like, or cannot afford.  Examples include cell phones and credit cards.  There is nothing reasonable or sustainable about these lock-in deals.  If the seller had a valuable product and he knew the customer would be satisfied there would be no need for draconian legalese.  The seller should always strive to keep the customer happy.  A happy customer will be glad to pay.  If the deal doesn’t work out (for whatever reason) all of these deals should allow the parties to walk away amicably.  Maybe the next deal will work out better.

Objectively, a home mortgage is nothing more than a rent-to-own contract in which the landlord (bank) doesn’t even have to maintain the plumbing.  Just because it is couched in twenty pages of fine print, and it has been marketed since the days of “Leave it to Beaver” as the American way, does not make it a reasonable deal.  The fact that the deal is fundamentally unreasonable is the reason that there is so much business for the courts today.

In all these cases, I propose that “the deal” be limited to what can be written on a single sheet of paper.  I will do this, if you will do that.  If it doesn’t work out, we walk away.  No recriminations.  No credit scores.  No lawyers.  No unhappiness.  Maybe we can make a better deal the next time.

I guarantee that the phone companies would be more responsive if every customer that they lost was a Big Deal, and not just part of business-as-usual churn.  And if the customers all left because marketing hype turned out to be lies, maybe the advertising would be more responsible.

Single page “contracts” that are easily understood would actually reduce the costs of virtually all aspects of doing business.  Even though some deals do not work out and represent a cost, it is hard to figure how those costs to a well-run business could possibly exceed the total cost of the legal system as it stands now.  There is nothing sustainable about making your own customers the adversary from day one. 

I do not believe that it is necessary to “Kill all the Lawyers” as Shakespeare so famously suggested.  It should be sufficient to simply let them starve to death.

Monday, July 18, 2011

The Challenge of Conventional Wisdom

Spacesuits and glove boxes

I believe that the space suits currently being used by NASA (and the Russians, for that matter) are in need of complete redesign. Spacesuits are intended to allow a person in a comfortable environment to manipulate objects in a nearby hazardous environment. This is exactly what a “glove box” used for sand blasting, electronic production or chemical handling does here on earth. Why not just invert the view and put the astronaut inside the glove box? This means we get rid of the “suit” concept. Why does an astronaut in zero G need his legs? And separate boots? Why can’t he take his hands out of the gloves? The biggest complaint that spacewalking astronauts have is that their hands get cold and tired.

My suggestion is that the entire torso and leg section of the suit be replaced with a rigid canister that the astronaut could rest inside. Perhaps when he “stands up” his head is positioned properly inside the helmet area so he can see, and his arms reach properly into the gloves. When he “sits down”, perhaps cross-legged, he has an area where he could eat, drink, relax, and so on. All he would really need is some sort of rigid grapple mounted on the outside of the chest area so he could clamp the suit in position to the structure he is working on. This would keep his motions inside the suit from starting a spin or other undesirable attitude.

A small hole in a glove is very dangerous. Currently, you would have to try to patch it against the escaping air. If you could get your hand out of the glove and apply a patch to the inside it would tend to be self-sealing.

This design would not be significantly different in volume from the current suits, so the power and air handling systems would be about the same. It would also fit through standard hatches and could be positioned by the same station or shuttle robot arms.

Why hasn’t anyone done a design like this? Because it cannot be tested, except in orbit. Astronauts are specifically trained for each EVA, simulating the situations they might encounter while working in space. Current suit designs are used extensively in EVA training sessions underwater. This simulates weightlessness in the general sense, but the astronaut is not weightless inside the suit. No astronaut could get valid experience using a glove-box type design until he was actually weightless. Therefore, any realistic design, experimentation, construction, testing, and revision would have to be done in space. There simply are not the facilities, resources, time, or personnel to do this work safely in orbit today. So we are left with the rather ludicrous legacy suits. They are designed to work in two completely different, incompatible environments and do a poor job in both.

A rapid prototyping facility in orbit would allow people in space to build the tools that they envision. And their vision will be completely different from the vision of engineers on earth. Using some of these Replicator concepts should help to make such a facility safe and sustainable.

* * *

Many of the systems in use today are the result of legacy discoveries or observations. In many cases entire industries have been founded on the basis of one particular observation or another. As these industries grow and become more ingrained in society, it becomes more and more difficult to rethink the basic premises. I am concerned that many of the foundations of modern society are based on such fundamental flaws that much of what is being produced today is at risk of catastrophic obsolescence.

I will give two areas of concern, broadly termed Auditory Systems and Vision Systems. I believe we must look to a future when all of our current multimedia industries will seem as though they were turning out daguerreotypes and recordings on wax cylinders.

Auditory Systems

Stereo audio systems are based on the simple observation that people have two ears. The assumption is that spatial discrimination is based on differences in timing and amplitude of the sounds heard by each ear. And you can create some real “Wow” effects by artificially increasing the separation of the two channels.

Unfortunately, this is not all that goes on in audio perception. In the real world, individuals interact with their environment. They turn their head. They use a multitude of cues to identify the location and nature of a sound. This is why surround-sound systems are increasingly popular. They are slightly more realistic.

Human beings are really very good at processing auditory information. Most people just do not realize it. You can tell the difference between standing in an empty room and one with furniture in it, just by the ambient sound. You can tell how close you are to a wall by the sound if you practice for a little while.

Your home theater system will have an audio “sweet spot” where the sounds are most like what the producers intended. If you move away from that spot the audio illusion does not travel with you any more than the visual one does. You get a distorted presentation and the visual cues and audio cues will be mismatched. Some people can take this in stride, like enjoying a roller coaster ride. It makes other people nauseous.

For our purposes here, the problem is that this audio information is insufficient. Each listener needs to be able to interact with the environment. Their own perceptual systems are unique. The spectral responses of their ears and the changes they expect as they move, breathe, swallow, etc. are all unique and affect the believability of the illusion in subtle ways. Just putting a pounding bass line on a sub-woofer does not make a believable motorcycle engine.

Another area of glaring deficiency is the synthesis of voices. One would think that creating believable voices would be much simpler than visual animation. After all, you can recognize the person speaking over a telephone. Single channel, low bandwidth, digitized audio. This should be nothing compared to the data and bandwidth requirements of video.

Even after years of research and development there are only a handful of speech synthesizers that come close to reality. And they are extremely limited, carefully tailored algorithms. I expect a proper speech synthesizer to be able to accurately emulate any human being. If I want it to do Katharine Hepburn, it should sound exactly like I would expect. If I want Sean Connery, that is what I should get.

I should be able to simulate age, gender and accents. I should be able to convey emotion: fear, rage, lust. I should be able to yell or whisper. And foreign languages would be no problem.

Current speech synthesis cannot even get the prosody (rhythm, stress and intonation) of simple sentences right. I expect a proper speech synthesizer to be able to sing. I see nothing unreasonable in requesting my Replicator to produce a rendition of Pirates of Penzance as performed by Sean Connery.

In short, I may have a crew of hundreds of animators to synthesize Shrek, but I still need Mike Myers, Eddie Murphy and Cameron Diaz.  And the voice parts represent a tiny, tiny amount of data.

* * *

The flip side of this is speech recognition systems. After years of research, they are also a pitiful shadow of what they need to be. Again, we have a tiny amount of data to deal with. No more than eight kilobytes or so per second for digitized audio, like over a telephone. To be rendered into text at no more than about four bytes per second: call it forty words per minute, like a stenographer can do.
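The arithmetic behind these figures is simple to check, assuming telephone-quality audio (8,000 one-byte samples per second) and roughly six bytes per word of text (five letters plus a space):

```python
audio_bytes_per_sec = 8000 * 1                   # 8 kHz, 8-bit telephone audio
words_per_min = 40                               # stenographer-style output
chars_per_word = 6                               # ~5 letters plus a space
text_bytes_per_sec = words_per_min * chars_per_word / 60
print(text_bytes_per_sec)                        # -> 4.0 bytes per second
print(audio_bytes_per_sec / text_bytes_per_sec)  # -> 2000.0 : a 2000-to-1 reduction
```

Turning eight kilobytes of audio into four bytes of text each second is a two-thousand-fold data reduction, which is why the task's difficulty lies in the pattern matching, not the bandwidth.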

Developers discuss such concepts as “speaker independent” voice recognition systems. I contend that there is no such thing. The key to recognizing speech is to have a huge library of speech to compare a given sample to. Our illusion of speaker independence is created by our experience with thousands of different people: we are rapidly choosing among thousands of speaker-dependent patterns. If I were to stand on a street corner and have one hundred random people pass by and speak one hundred random, single words to me I would probably be very slow and inaccurate in my understanding. If, however, each of those hundred random people said a sentence, my brain would be able to pull out age, gender, ethnicity, accent and other factors which would be used to tailor my expectations and make my recognition system much more accurate.

My three-year-old grandson is in the process of building such a universal library for speech recognition. Everyone he hears speak, either directly to him, or in the background, or on television adds to his repertoire of context and recognizable words. He may not know meaning, spelling or anything else, but he will certainly be able to tell when his mother, father or SpongeBob are discussing “crab fishing”. As far as he is concerned: same sounds, different speakers - no problem.

Knowledge of the speaker and the expectations that your brain derives from that knowledge is what allows us to pull a single voice out of cocktail party chatter or simple background noise. Tailoring expectations is also what makes it so much easier to understand someone when you can see their lips. The broader your experience and the larger your exposure to different speakers, the more likely it is that your brain will be able to choose a good template to match against the sounds it hears.

* * *

I believe that, in the long run, speech recognition and synthesis systems will be parts of a single whole. The speech recognition portion would have examples of Katharine Hepburn to tailor its expectations when analyzing her speech. The speech synthesis would be adaptable and would iteratively feed samples into the recognition system to see how well it approximated the expectations. Just the way a voice actor listens and experiments to learn an accent.

Adaptive systems such as this would make the man-machine interaction much more reliable by allowing the machine to automatically switch to the language pattern most easily understood by the user - for both speaking and listening. This would minimize the misunderstandings in both directions.

Vision Systems

Human vision is very good at spotting important details. We are descended from millions of generations of individuals who were not eaten by the saber-toothed cat. We can spot tiny clues to larger patterns hidden in the bushes. Sometimes, we see things that aren’t really there, but this is the safe option. To fail to see something that really was there could be fatal. As long as we are not too jumpy to find a meal and eat it, we will do OK.

This vision system is very good, but we can have some fun with it. Play some cute tricks. Every grade school child has seen a cartoon flip-book. Make your own little animated character. One still frame after another, your brain interprets it as a moving image. Over the past century, we have taken this trick and built entire industries around it. Movies and Television and Video Games.

The problem is that this flip-book trick bears essentially no relation to what is actually going on in our visual perception. Yes, it usually works. No, it does not really allow us to perceive all we could.

Our retina is designed to detect changes in light level. We see things when edges pass across the photo-receptors in the eye. If there was no movement we would lose the ability to see any patterns at all within a few seconds as our neurons reached a stable state. Therefore, our eyes are designed to always introduce motion. Tiny tremors known as micro-saccades. There are always edges or changes in brightness moving across our field of vision. The patterns and timing of those changes, coupled with the direction of the saccade are what allow us to recognize objects.

Unlike all current photographic methods, film or digital, our eyes have no shutter and the spacing of the light-sensitive elements is not uniform. The motion-sensing characteristic is what allows our eyes to function without a shutter. And the distribution of cells in the retina allows us to see fine detail as well as a wide-angle view simultaneously, without resorting to the zoom-lens concept. Even when we are concentrating on fine detail in an object, our peripheral vision is protecting us from lurking cats.

These fundamental differences lead me to believe that the one-still-image-after-the-other movie approach will be replaced with a more appropriately designed technology.

One thing to remember is that the motion detection within the eye is really fast. On the order of one hundred times faster than the frame rate in a movie. And your eye position is an interactive part of the perceptual process. If I see an edge move across the movie screen in my peripheral vision I will register one thing. But if I am looking straight at it the edge will skip over so many receptors that I usually just take it for granted that it moved smoothly. In other words, I can tell it is an illusion. Even at very fast frame rates, like an IMAX movie. A big part of the problem is that film tricks like motion blur (as the shutter speed reaches the maximum for the frame rate) just introduce blurred blobs to the retina. There is no sense of direction, just there and not there.

The retina is designed to help figure out what direction an object is moving, and it does so in conjunction with the pattern of micro-saccades and the movement of your body. This is what allows a batter to hit a major-league fastball. He can actually see the seams and accurately judge the motion of a spinning, 2.86 inch diameter sphere coming toward him at over 90 miles per hour. The total time between the pitch and the time the bat must make contact is less than half a second. There is little chance that anyone could hit a fastball if they were allowed to see only a video or movie of the pitch. It is easy to call it after the fact. But actually seeing it and getting the swing down in time is one of the greatest challenges in all of sport.
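The timing claim is easy to check with back-of-the-envelope arithmetic. The regulation rubber-to-plate distance is 60.5 feet; the actual release point is a few feet closer, so real flight times are shorter still.

```python
# Flight time of a 90 mph fastball over the 60.5 ft rubber-to-plate distance.
mph_to_fps = 5280 / 3600            # 1 mph = 1.4667 ft/s
speed_fps = 90 * mph_to_fps         # 132 ft/s
flight_time = 60.5 / speed_fps      # seconds in the air
print(f"{flight_time:.3f} s")       # about 0.458 s -- under half a second
```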

* * *

The digitization of video images leaves a lot to be desired. Black and white (panchromatic) movie film had excellent sensitivity, resolution and dynamic range. When this was scanned to create old analog broadcast television signals using an old-style “film chain”, much of this dynamic range was preserved. In particular, blacks were black and showed a smooth transition to white.

The same cannot be said of modern digital signals and displays, even “high definition” ones. Invariably the sales and marketing hype will emphasize the brightness of the image, or the sharpness of selected scenes. Much of this “Wow” factor comes from unnatural adjustments of color saturation to make the customer think they have been missing something with the older technology.

One key to spotting the limitations that I am talking about is to observe scenes with highlights and deep shadows. Invariably, the shadow will exhibit a “posterization” effect: you can see the contours of digitization steps where the intensity changes by a single integer step. Furthermore, you may be able to spot a “blockiness” in the shadows instead of smooth contours. This is an artifact of the MPEG compression algorithm. Dark shadow areas are also subject to a “crawling” effect caused by slight variations in the way the MPEG algorithm renders the region from one frame to the next. I contend that the presence of these types of artifacts indicates that this is a technology where the compromises required to “get it to market” have limited the range of material that can be produced for the medium. New productions won’t suffer because the directors and cameramen know that shadows don’t work. And the old films have now become incompatible with the new media.

This incompatibility is far deeper and more fundamental than the much more obvious and annoying things such as different frame rates and different aspect ratios. All craftsmen strive to achieve quality work within the limitations of their tools. The extraordinary effects that are achieved in one medium may be lost in another. Film makers who, for example, use only the center third of their frame simply because it might eventually be shown on TV are doing a disservice to both their vision and their audience.

* * *

There are no synthetic vision systems that take advantage of either the shutterless concept for motion sensing or the fovea-based idea to give simultaneous zoom and wide-angle performance. All because the conventional wisdom says that perception needs a static image of the whole scene. And the flip-book idea is good enough for movies.

* * *

Grade school children are also taught all about primary colors and color wheels. Red, green and blue: primary colors of light. Magenta, cyan and yellow: primary colors of pigment. Simple concepts. It is how color TV screens and computer monitors work. It is how digital cameras work. It is how four color (with the addition of black ink) offset printing works. It is how color film and movies work.

The only problem is that it is not how our vision system really works. We are told that there are red-, green- and blue-sensitive cones in our retinas. These are differentiated by three different photopigments within the cells. Upon closer examination, that is only true in the broadest sense. Ten percent of men have only two different working pigments, thus exhibiting red/green color blindness. Up to fifty percent of women have a genetic mutation that produces four different pigments, thus yielding better ability to distinguish subtly different colors.

Many animals such as birds not only have four different photopigments, their cones include specialized droplets of colored oil that narrow the spectral sensitivity of their cones and add to the ability to resolve subtle variations in color. One reason pets do not respond to photographs or television as we might expect is that they perceive colors differently. An image that appears photo-realistic to us will have a cartoon quality to the animals.

No matter how much I fiddle with the white balance of my camera or the gamma correction of my monitor I will never be able to come up with a setting that allows my wife and me to agree on a color match. The trichromatic color technology is fundamentally flawed and needs to be revisited in a thoughtful way. We need to transition to full-spectral imaging without an industry-wide upheaval.

* * *

Our poor, abused grade school children are also taught all about stereo vision and depth perception. It seems obvious. You have two eyes. The angle between the two as you focus on an object gives you its distance. Ties right in with your geometry class. The only problem is that the effect is far too small to be of much use beyond ten feet or so.
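The geometry backs this up. Assuming eyes about 2.5 inches apart (a typical figure, not a measured one), the convergence angle falls off rapidly with distance; past ten feet there is very little angle left to measure.

```python
import math

ipd_ft = 2.5 / 12                   # interpupillary distance, ~2.5 inches

def vergence_deg(distance_ft):
    """Angle between the two eyes' lines of sight when fixating a point."""
    return math.degrees(2 * math.atan((ipd_ft / 2) / distance_ft))

for d in (1, 5, 10, 50):
    print(f"{d:3d} ft: {vergence_deg(d):.2f} degrees")
```

At arm's length the angle is over ten degrees; at fifty feet it is a quarter of a degree, far too small a change for the brain to use reliably.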

A much more important effect is the parallax of near-field objects against the distant background. You get two slightly different views and perceive it as depth. You can even see the effect yourself with a stereoscope or View-Master.

These stereo effects are all valid, but they do not tell the whole story. You can get depth perception with only one eye. All you need is near-field objects and some motion. When you drive a car your head moves around. The hood ornament or fender or some dirt on the windshield is all that is needed. Using only one eye you can judge distances, park properly, etc. Your brain is fully capable of figuring it out with little training.

I have observed the way a cat’s eyes work. In particular, they tend to have eyebrows and whiskers that droop off to the side of their eyes. A little thought on the matter yields the realization that whiskers as near-field objects and micro-saccades give a cat depth perception in their peripheral vision using only one eye. In other words, a cat can be intently stalking a meal, looking straight ahead, and still be aware of exactly how far it is to the nearby branch it is passing. The combination of motion-sensitive peripheral-vision (non-foveal) photoreceptors, micro-saccades, whiskers, the target object and the background gives a tremendous amount of information. Processed by an astoundingly capable visual cortex, this information allows a level of perception only hinted at by the grade school explanation.

There are many other things at work here. Unlike the modern photographic approach, nature has not attempted to keep the field of vision flat. Distortions arise in the single-element lens and in the spherical curve of the retina. This is not a bad thing - rather it is used to advantage to gain additional information about a scene. The eye rotates about an axis between the lens and retina. As the eye moves, these distortions will help to accentuate and outline nearer objects against the background.

Again, the problem with this misunderstanding of visual perception is that modern technology is only taking advantage of a tiny part of the capabilities inherent in all of us.

* * *

These observations have wide-ranging implications. What will an advanced generation of display device look like? How can we make objects with full-spectrum controlled color? Kind of like some sort of super-paint. How will this affect art? What about this interactive, non-static motion-sensing business? How can I design my art so that I control your perceptions? Draw attention to certain parts on a consistent basis.

What can this tell us about pattern recognition? Things oriented at odd angles. Floating in zero gravity.

What about facial recognition? I can easily spot my wife in a crowd. It is harder to do in a picture of her in a crowd, since I don’t get any motion cues. It is really hard in a video of her in a crowd because the camera’s point of view is fixed and the resolution is very low. Unlike the real world, focusing on a particular point on the screen doesn’t make that area any clearer.

Implications for symbology:  Writing, fonts, markings.

Normal writing has a tremendous amount of redundancy.  Words are written in a much more complex way than they need to be to convey the minimum information.  Most English words, for example, can be distinguished from one another by knowing only the letters they contain.  The ordering of the letters merely adds redundant information. 

These words are easy to read.
Eehst dorsw aer aesy ot ader.

This is the principle behind the operation of the court reporter’s stenograph machine where each word is formed by simultaneously pressing certain keys. 
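The letter-set claim can be tested directly: give each word a "signature" made of its sorted letters, and see which words collide. A small sketch (the word list is my own, chosen to include a few of the rare ambiguous clusters):

```python
# Group words by their sorted-letter "signature"; words sharing a signature
# are the only ones this encoding cannot tell apart.
from collections import defaultdict

words = ["these", "words", "are", "easy", "to", "read",
         "listen", "silent", "enlist",      # a rare ambiguous cluster
         "stop", "pots", "spot", "tops"]    # another one

groups = defaultdict(list)
for w in words:
    groups["".join(sorted(w))].append(w)

ambiguous = {sig: ws for sig, ws in groups.items() if len(ws) > 1}
print(ambiguous)   # only the deliberately chosen anagram sets collide
```

Run against a full dictionary, the same few lines show that the overwhelming majority of English words have a unique signature.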

How can we combine this observation with what we know about vision systems?  How can we make fonts or markings more easily recognizable or less ambiguous?  If we contemplate weightlessness, how can we make markings easy to read no matter what their orientation?

An early Artificial Intelligence system, based on a neural network, was designed and trained to recognize different aircraft as either NATO or Soviet.  In the lab, it appeared to perform well but in the field it failed miserably.

The training was done using photographs from Jane’s Aircraft Recognition Guide.  Further research showed that in the training photos, NATO aircraft predominantly flew right and the Soviet planes left.  The neural net, having no concept of the Cold War, was simply figuring out which side of the picture had the pointy end.

On Future Currency

Copies of Tokens

As all airlines know, there is nothing inherently wrong with selling multiple tickets for the same airline seat.  Just as in the quantum mechanics example of Schrödinger's Cat, the world continues with a superposition of two (or more) people believing they will fly on the plane.  The customers are all happy up until two of them actually show up and try to sit in the same seat.  At that point, the quantum wave function collapses and a single reality is restored.  One of the cats lives, the others die.  The airline's only problem is dealing with the dead cats by explaining the fine print in the contract. 

Replicating Money

Likewise, I see nothing inherently wrong with copying currency.  U. S. paper money already has unique serial numbers.  In this age of telecommunications and cryptography this serial number concept simply needs to be extended. 

The Federal Reserve needs to issue notes with cryptographic serial numbers and maintain an on-line verification facility.  At any time I can have this master system confirm the validity of my note.  For a transaction I can exchange it for one or more brand-new notes with newly created numbers while simultaneously invalidating my old bill.  The key here is that the long, un-guessable number is the real “currency”.  You could print it on your own printer.  You could make backup copies to your heart’s content.  As soon as the first person uses the currency, however, all the copies are no longer valid.

By using a cryptographic aspect to the serial number I can still perform off-line operations, confirming that this is real, U. S. currency.  I just do not have absolute confidence that I can cash it in until I can contact the validation system.

This virtual currency is exactly what happens today with a personal check.  If I have $100 in the bank, I can still write $100 checks to each of five different people.  The first one to the bank wins.

Making the cryptographic serial number un-guessable is important, not only to allow some degree of off-line verification, but to prevent trial-and-error theft of other people’s cash.  This is similar to having a set of rules that can be used to make up new words.  The rules can be used to determine if something looks like a real word.  But you still need a dictionary to be sure that the word exists, and what it means.  The short serial numbers used today are an open invitation to forgers who simply print bills with numbers similar to a known good bill. 

For example, one could use a sixty-letter-and-digit serial number in place of the current eleven-character format.  This could give you a 300-bit value that would include a public-key signature to prove authenticity.
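A quick check of the arithmetic: sixty case-insensitive letters and digits carry about 310 bits, comfortably more than the 300-bit value mentioned above.

```python
import math

symbols = 36                          # letters and digits, case-insensitive
length = 60
bits = length * math.log2(symbols)    # entropy of a random serial number
print(f"{bits:.0f} bits")             # about 310 bits
```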

Now, suppose you want to send money to your daughter at college.  Just read the number off of a bill from your purse to her over the phone.  Western Union, eat your heart out!

You would want to be careful about leaving your cash laying around.  Someone could steal it just by copying the number.  But the same is true with modern currency - if you leave it laying around it will probably get stolen. 

You could carry your entire bank account on an encrypted thumb drive.  And leave a backup copy at home.  If it gets lost or stolen, you lose nothing.  And the “bills” that you spent before losing the drive are just “no longer valid” on the backup copies.  The encryption makes the lost drive useless to anyone who finds it.

Very few people can remember a sixty-character value after just glancing at it.  Lots of people, however, can remember the five to nine digits on the end of your checking account number.  I am surprised that more so-called identity theft isn’t perpetrated by simply printing bogus checks on account numbers you see at the grocery store. 

Doing bill-by-bill verification against a master database would not be inherently more difficult than current credit card verification.  There would be a higher volume of simpler transactions, though. 

Something similar to this is already in use by the U. S. Postal Service.  Various companies offer “print your own postage” programs.  These generate a barcode that is scanned by the post office to verify payment of postage.  The barcode can be printed or copied any number of times but the first time it is used (scanned at the post office) the other copies become invalid.
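The whole Verify-Invalidate-and-Replace cycle fits in a few lines of Python. This is only a sketch of the idea, not a design: `secrets.token_hex` stands in for the cryptographic serial number, and an in-memory set stands in for the Federal Reserve's master database.

```python
import secrets

class IssuingAuthority:
    """Toy master database: a serial number is valid until its first use."""
    def __init__(self):
        self._valid = set()

    def issue(self, denomination):
        serial = secrets.token_hex(32)            # long, un-guessable number
        self._valid.add((serial, denomination))
        return serial

    def verify(self, serial, denomination):
        return (serial, denomination) in self._valid

    def spend(self, serial, denomination):
        """Invalidate the old note and hand back a brand-new one."""
        if (serial, denomination) not in self._valid:
            raise ValueError("note already spent or never issued")
        self._valid.remove((serial, denomination))
        return self.issue(denomination)

fed = IssuingAuthority()
note = fed.issue(20)         # a $20 "bill" you could print or copy freely
fresh = fed.spend(note, 20)  # the first use wins...
print(fed.verify(note, 20))  # ...and every copy is now worthless: False
print(fed.verify(fresh, 20)) # the replacement note is good: True
```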

* * *

Now let’s see how this idea stacks up.
  • The Bureau of Engraving and Printing might or might not like it.  They do not actually get to print anything anymore and get turned into database administrators.  On the other hand, they were fighting a losing battle against the counterfeiters, anyway.
  • North Korea would hate it.  No more super notes.
  • The Secret Service would love it.  No more super notes.
  • Banks would love it.  They do not have to deal with cash anymore.
  • Retailers would love it.  A simple scanner could read the “bills” that customers could print on ordinary paper.  Just like handling coupons.  And no theft or robbery problems. 
  • Drug dealers would love it.  No more unwieldy suitcases of cash. 
  • The ACLU would hate it.  Big Brother might track the serial numbers. 
  • Your daughter would love it.  Instant party time.
* * *

This Verify-Invalidate-and-Replace strategy could be applied to almost any value token.  Airline tickets.  Concert Tickets.  E-Commerce specie. Each scenario would require a method for dealing with fraudulent sales, but detecting the fraud would be greatly simplified. 

A variation of this single-use verification and replacement strategy could be used to replace personal identifiers such as Social Security numbers, credit card or bank account numbers.  Using the ability to make duplicates invalid would prevent most of the current forms of identity theft. 

Note that what I am talking about here is not one of the elaborate cryptographic security schemes developed in academic circles.  Yes, it is a variation of a public-key system.  But, no, it does not require users to hold digital signatures or encrypt their transactions (beyond basic network communication security such as SSL).  It is not invulnerable, but it also does not suffer from the key management and key revocation issues that plague those academic systems.

The generally temporary nature of a particular token (“bill”) and the possibility of local “trusted issuing authorities” fit in with my general theme of Replicator Technology.  Each community, country or colony could have its own Issuance and Verification center.  This provides the local, autonomous operation that I look for.  And a simple communication link with other authorities could provide exchange of payments and allow honoring of another colony’s money.

Intellectual Property vs. Butchery

I tried to watch Alien on my cable channel’s Fright Festival.  What I saw was not Alien, but rather a derivative work: an original compilation by a nameless marketing director.  Short clips from Alien interspersed with thirty percent randomly selected advertising, periodic banners telling me that watching Alien now is not as important as the fact that Swamp Thing is coming up next, and a ubiquitous television channel logo. 

If I were Ridley Scott I would be mightily offended.  At no time did my vision for Alien include the juxtaposition of H. R. Giger art and "Girls Gone Wild". 

Advertising this show as Alien is false on two counts.  First, it is not just Ridley Scott’s Alien, as we are led to expect.  Second, implying it to be Ridley Scott’s deprives the cable channel's marketing director of his rights of authorship for the "original compilation work". 

In fact, if I had the stature of Ridley Scott, I might be tempted to enforce draconian licensing requirements on my works.  No butchery.  No editing.  No overlays.  No advertising.  No time compression.  And, in this age of high-definition, No Pan-and-Scan.  Basically, if you want my work, you will take the whole thing.  My way. 

Cable television needs the content creators a lot more than the content creators need cable television.  Direct sales and the Internet are much more lucrative than residuals for late-night TV.

Interestingly, Georgia O’Keeffe placed unique restrictions in the licensing for her wonderful paintings. She required that all reproductions be smaller than the originals.

This is a brilliantly simple use of the current copyright laws to protect the brand and preserve the wonder of the originals, while also allowing a wider audience to be exposed to her work.

Reflex Responses

In my youth, a friend of mine studied swatting flies.  He discovered that a fly sitting on a table is very good at evading a hand.  The fly’s reflex makes it turn away from the attack very quickly.  If, however, there are two attackers the fly is not nearly so successful.

Therefore, if you hold both hands about six inches apart, parallel to the direction the fly is facing and an inch or so above the table, and then clap above the fly, you will almost invariably hit and stun the fly.

The fly takes off when it senses movement, either visually or through air currents.  It will fly straight, turn left or right, or (interestingly) even backwards, but this attack defeats all those options.

The fly’s reflex is similar to the one that kills so many armadillos in the road.  The near-sighted armadillo would likely be safe if it just sat still.  But when it detects the car it jumps up into the passing undercarriage. 

On Hidden Feedback Systems

I used to teach scuba diving.  This led me to a study of the effects of breathing gases under pressure. 

In the early twentieth century, J. B. S. Haldane came to the conclusion that different “compartments” of the body (different tissue types) dissolved and released gases such as nitrogen at different rates.  His work led to the Dive Tables used by the Navy and sport divers today.  The tables give empirically derived values for the amount of time that a diver can spend at a given depth and the rate at which he can ascend to the surface without the risk of dangerous bubble formation in the tissues.
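Haldane's compartment idea reduces to simple exponential uptake: each tissue approaches the ambient nitrogen pressure with its own half-time. A sketch with illustrative numbers (these half-times are for demonstration only - this is not a dive table):

```python
import math

def compartment(p0, p_ambient, half_time, minutes):
    """Haldane-style exponential approach toward the ambient gas pressure."""
    k = math.log(2) / half_time
    return p_ambient + (p0 - p_ambient) * math.exp(-k * minutes)

surface_n2 = 0.79            # atm of nitrogen in air at the surface
depth_n2 = 0.79 * 4          # breathing air at 30 m (4 atm absolute)

# Illustrative half-times only; real tables use many more compartments.
for half_time in (5, 20, 120):
    p = compartment(surface_n2, depth_n2, half_time, minutes=30)
    print(f"{half_time:3d} min compartment after 30 min: {p:.2f} atm")
```

After thirty minutes at depth the fast compartment is nearly saturated while the slow one has barely begun to load, which is why ascent limits depend on the whole dive history and not just the current depth.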

In my own analysis, attempting to work from first principles, I was never able to prove that any change in atmospheric pressure was safe. This is not a matter of physics or chemistry, or even biology, per se.

The human body contains many interlocking feedback systems, governing the distribution of oxygen, removal of carbon dioxide, balancing electrolytes and enzymes, maintaining temperature and controlling digestion and excretion.  All are the product of billions of years of evolutionary selection for individuals that did not die when a low-pressure weather system passed by.

People die when a biological feedback system hits its limit and is no longer able to compensate for current conditions.  The problem is that these feedback systems are completely hidden from view.  The empirical tests conducted on 20-year-old Navy SEALs do not generally apply to 50-year-old overweight smokers in a dive class. 

The Challenge of Obsolescence

Replicator concepts allow us to preserve the knowledge and techniques required to fabricate physical objects.  In many cases this will be quite valuable, but it must be realized that many objects are useless without their context.  As society moves forward and people use newer techniques to accomplish a goal the infrastructure of the past will fall by the wayside. 

Properly operating a steam locomotive is a frightfully complicated task.  Unlike most modern engines that essentially have only a starter and throttle, steam engines require careful attention to temperatures and pressures in the firebox, boiler and steam chests.  Water levels, fuel flow, lubrication, replenishment of fuel, water and sand also require the attention of the (usually two-man) crew. 

A steam locomotive operates differently under different weather conditions: temperature, humidity, rain or ice all significantly affect its operation.  The crew must anticipate changes in load caused by hills or curves.  The mechanism is very slow to respond to changes in settings and it can be extremely difficult to restore normal operation if any single parameter slips out of tolerance.  Operating a locomotive at 30 mph is a completely different thing than operating at 60 mph.  It took years of on-the-job training for an engineer to learn to safely control a steam locomotive. 

A Pennsylvania Railroad steam locomotive (#460, the “Lindbergh Engine”) pulling a tender, baggage car and coach car, is said to have reached a speed of 115 mph on June 11, 1927.  I am amazed by the skill of the crew, and the level of trust that they would place in the metallurgy, assembly and maintenance of the locomotive, cars, wheels, tracks, crossings, etc.

There is no reason to believe that a Replicated steam locomotive would not be fully functional.  But it was designed to be operated by a trained crew and run on properly installed and maintained tracks.  If I actually needed to operate my new locomotive, I would have to replicate an entire infrastructure of physical objects as well as draw forth the knowledge and skills needed to use this technology. 

Many technologies have fallen into disuse - sometimes through obsolescence, sometimes through simple neglect.  The ability to use the technology that will eventually be embodied in the Replicator’s “Universal Library Of Everything That Has Ever Been Made” will depend on that library containing much more than physical construction specifications. 

We need to think about the concept of the Operator’s Manual in the same radical way we think about disassembly and recycling.  The manual needs to be actually useful in describing how the object is intended to be used and what standard level of background knowledge is expected of the user.  Making documentation for a truly broad audience does not mean copying the same unintelligible instructions into seventeen different languages.