Saturday, September 24, 2011

On Inventory Management

Abstract

Managing the availability of goods is key to the sustainable future of civilization.  Making better use of materials and resources, eliminating needless duplication, and improving reuse and recycling can improve the lives of individuals, families and communities.  This essay examines the flow and availability of goods from the standpoint of inventory management systems.  I give examples of current systems, look at an ideal goal, and propose steps that could lead to both immediate and long-term benefits.  


This essay is a work-in-progress and will be updated periodically.  Other  related essays concerning image processing and object recognition will be posted as they reach maturity.  

Background

In my youth, my father had a machine shop and lab in the five-car garage area of our house.  We parked the cars in the driveway.  As I grew up, I spent much time exploring the boxes, bins, cabinets, shelves and assorted containers, learning about the objects within.  I learned to use the tools, built projects and conducted experiments.  By the time I was an inquisitive nine-year-old, I had examined almost every object and attempted to divine its use.  I read the Newark and Grainger catalogs as I fell asleep.  I had a strong vocabulary and could describe and name most any tool or part.

In particular, I could accurately describe the location of almost any item in the shop.  Many items had multiple homes, as duplicates were encouraged and often grouped by project instead of into simple bins.  I could clean a work area and put tools, parts and equipment back in their normal places.  I could disassemble and reassemble things ranging from toys to lawnmower engines. 

I was, in effect, an inventory manager with skills superior to any "professional" system in existence today.

Let us examine the features and requirements of inventory management and suggest techniques that might bring the capabilities in the high-tech world up to the level of a small child.

Requirements

Very simply, we must keep track of objects in time and space. 

We must have some general idea of what we mean by an "object", and ways of recognizing objects and remembering their properties.  This implies a data entry system with a method of rapidly assigning properties to objects.  These properties may be descriptions from catalogs or data sheets, observed properties such as size or color, and arbitrary manually entered information.

Tracking, in its simplest form, involves only "Get this object from here and put it there" concepts.  Manual data entry and simple scanners might be sufficient as a first step.  An automated system would probably observe an area and recognize objects as they enter and leave.
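
As a minimal sketch of that simplest form - nothing here reflects a real system, and all names are invented - a tracking record might capture no more than the object, the source, the destination and the time:

    # A minimal "get it from here, put it there" record.  All names are
    # invented; nothing here reflects a real system.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class MoveEvent:
        object_id: str          # whatever identifier the recognizer assigns
        from_location: str      # e.g. "garage/shelf-3/bin-7"
        to_location: str        # e.g. "workbench"
        observed_at: datetime   # when the move was seen or entered

    log = []

    def record_move(object_id, src, dst):
        log.append(MoveEvent(object_id, src, dst, datetime.now()))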

We should be able to answer questions like the following (a sketch of such queries follows the list):
  • "Where is the nearest ...?"
  • "How many ... do we have?"
  • "How long have we had ...?"
  • "Where has ... been stored?"
  • "Is ... safe to handle?"
  • "Does ... need to be right side up?"
  • "What does ... attach to?"
  • "How does ... need to be stored?"
  • "Do we need to order more ... when this is used up?"
  • "Is ... more expensive than ...?"
  • "Is there anything special about ...?  Is it rare or valuable or dangerous or fragile?"
  • "What is ...?  What is it used for?"

The system must be so easy to use that it will be part of everyday life.  An assistant that can answer accurately when you wonder where you left the car keys.  A retail checkout system that does not need barcodes. 

The overall system must be tolerant of bad or conflicting data.  Over time everything should be generally self-correcting.

The system should not necessarily require a "Parts in Bins" organization.  There may be preferred locations, so that tools tend to wind up in tool boxes, but this should not be carried to extremes.

The database should:
  • allow for object identification,
  • retain arbitrary properties,
  • track current location and location history,
  • group objects during storage or use,
  • allow for assembly and disassembly of composite objects (a small schema sketch follows this list).
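
One way to read those requirements is as a small schema.  A minimal sketch in SQLite, with every table and column name invented for illustration:

    # A minimal schema sketch covering the five requirements above.
    # Every table and column name here is invented for illustration.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE object (
            id INTEGER PRIMARY KEY            -- object identification
        );
        CREATE TABLE property (               -- arbitrary properties
            object_id INTEGER REFERENCES object(id),
            name TEXT,
            value TEXT
        );
        CREATE TABLE location_history (       -- current location = newest row
            object_id INTEGER REFERENCES object(id),
            location TEXT,
            seen_at TEXT
        );
        CREATE TABLE object_group (           -- grouping in storage or use
            group_id INTEGER,
            object_id INTEGER REFERENCES object(id)
        );
        CREATE TABLE assembly (               -- composite objects
            parent_id INTEGER REFERENCES object(id),
            child_id INTEGER REFERENCES object(id)
        );
    """)
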
Visual Object Recognition

Current barcode scanners beep when they successfully scan an object.  As far as I am concerned, a proper scanner will only beep when it sees an object that it does NOT recognize.  That is, we should eliminate the unnecessary confirmation noise.  Identification should be so accurate and so routine that the only things that should need the user's attention are the true exceptions.


A forthcoming essay will focus on the requirements of visual object recognition systems.


Requirements range from the most basic detection of visual features against a background of clutter all the way through comprehensive integration with a central object-location-tracking database.  The latter is required to ensure accurate identification of a particular object, not just the kind of object.


Selecting the pencil lying on the notepad in front of you is almost always preferable to selecting an identical pencil from the pencil holder.  The history of the object is as important as its location, and, in general, history requires the combined recognition and tracking across multiple visual sensors.

In a world of ubiquitous, distributed visual recognition systems such as foveal cameras, each camera develops a learned history of particular features that compose and are associated with particular objects.  The different histories ("experiences") of the cameras mean that their libraries of recognition templates will be unique.  And yet, we want to be able to assign the same "identity" to objects as they move from one camera's area to the next.  This implies that there should be an "object template description" that is both compact and sufficient to (more or less) uniquely identify a particular class of object.  This data is what would normally be communicated to the central object-location database, and to other nearby cameras, to aid in tracking particular objects from one station to the next.
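
As a sketch of what such a compact template description might contain - assuming a simple attribute-based recognizer, with all names invented:

    # Hypothetical compact "object template description" that a camera
    # might share with the central database and with nearby cameras.
    from dataclasses import dataclass, field

    @dataclass
    class ObjectTemplate:
        class_hint: str                               # e.g. "pencil"
        features: dict = field(default_factory=dict)  # coarse, named features

    def plausible_match(a, b, threshold=0.8):
        """Crude score: fraction of shared feature names that agree."""
        shared = set(a.features) & set(b.features)
        if not shared:
            return False
        agree = sum(a.features[k] == b.features[k] for k in shared)
        return agree / len(shared) >= threshold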

Consider trying to locate a particular individual using the cameras in a shopping mall.  Start with a general description such as "short, fat guy in a red suit".  This is actually a LOT of information expressed very succinctly.  It lops out most of the objects from your recognition database and allows attention to be devoted to the most likely suspects.  Maybe a candidate is seen from one point of view and you add to the description: "he has shiny black boots".  Motion tracking and adjacency ensure that this is the same individual.  You are building a more complete description.  Another view: "He has a white beard".  Multiple observers watching from different cameras share the ability to casually recognize these high-level features and need ONLY the general location and the compact description to be reasonably assured of success.
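
A toy sketch of that accumulation process, with an invented candidate list; the point is only that each added attribute prunes the set:

    # Each camera report adds attributes; candidates contradicting any
    # known attribute are pruned.  All data here is invented.
    candidates = [
        {"height": "short", "build": "fat", "suit": "red", "boots": "shiny black"},
        {"height": "short", "build": "fat", "suit": "red", "boots": "brown"},
        {"height": "tall", "build": "thin", "suit": "gray"},
    ]

    description = {}

    def add_observation(**attrs):
        description.update(attrs)
        return [c for c in candidates
                if all(c.get(k) == v for k, v in description.items())]

    add_observation(height="short", build="fat", suit="red")   # two remain
    add_observation(boots="shiny black")                       # one remains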


Modern Examples

The inventory at a WalMart retail store is intended to be in near-constant motion.  Trucks with assorted merchandise arrive at the back doors.  Products are rapidly distributed to essentially arbitrary locations within the store for presentation to customers.  Customers roam the store selecting desired items.  Items thus selected are scanned, purchased and removed through the front doors.  Approximate item-counts are maintained by using a "delivered minus sold" algorithm, but this becomes so inaccurate over time that periodic physical inventories and complete overall reorganizations of the store are necessary. 
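
The arithmetic of "delivered minus sold" is trivial; the trouble is everything it cannot see.  A toy illustration with invented numbers:

    # "Delivered minus sold" vs. reality.  Theft, damage, mis-scans and
    # mis-shelving never enter the ledger, so the book count drifts until
    # a physical inventory resets it.  All numbers invented.
    delivered, sold = 500, 430
    book_count = delivered - sold                    # 70: what the system believes
    unrecorded_losses = 12                           # shrinkage the system never sees
    actual_count = book_count - unrecorded_losses    # 58: what the shelf holds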

If I visit a hardware store I usually expect to be able to find a knowledgeable employee and say something like "I need a bigger one of these", or "This wore out and pieces broke off.  Do you have any more?", or "I need to mount this on a brick wall.  What do I use to do it?"  The employee is expected to be able to recognize my object and its use, match it against items in his experience using arbitrary criteria, and give me a meaningful response within a few seconds. 

Typical large companies manage warehouses and stock rooms with bins, shelves, cabinets, etc. and try to ensure that all like objects are collected in one place.  This facilitates locating desired items, counting stock, providing an appropriate storage environment and ensuring that replacements are ordered in a timely manner.  Frequently, in-house part numbers are created and assigned to the storage locations to help with this process.  Unfortunately, there is often a many-to-many relationship: many different vendors may supply the product that winds up in a particular bin, and the exact same product may need to be stored in different locations due to convenience or necessity.  The computerized inventory system most likely attempts to enforce an idealized "one part number, one location, one quantity" paradigm.  More advanced or customized systems tend to become increasingly unwieldy due to special cases and exceptions and the need for more operator training.
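
Modeling that reality honestly means junction tables rather than a single location column.  A sketch in SQLite, with invented names:

    # Junction tables: many vendors per part, many stocking locations per
    # part, instead of "one part number, one location, one quantity".
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE part        (part_no TEXT PRIMARY KEY);
        CREATE TABLE vendor      (vendor_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE part_vendor (                 -- many-to-many
            part_no   TEXT REFERENCES part(part_no),
            vendor_id INTEGER REFERENCES vendor(vendor_id)
        );
        CREATE TABLE part_location (               -- many-to-many, with count
            part_no  TEXT REFERENCES part(part_no),
            location TEXT,
            quantity INTEGER
        );
    """)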

My wife appears to have a diametrically opposite approach.  I have often observed an apparent equal probability that a particular item will be in any of the boxes, bins, shelves, cabinets, closets, and drawers in the house.  This is actually not an outlandish situation if you have a good memory and are able to accurately and rapidly communicate enquiries and responses: "Where's the glue?" "Elmer's is in the bin under the bed.  Contact cement is on the top shelf of the linen closet."

On the International Space Station inventory items tend to be stored in bags within bags.  In the micro-gravity environment there is generally no need for rigid containers.  Objects can be stored compactly in collapsible bags, packed into storage spaces, and gently secured against air currents and slight accelerations using bungee cords.  Inventory access problems usually revolve around trying to figure out "which bag?", "where?", and "how do I get to it?".  This frequently involves lengthy conversations with the ground controllers who are responsible for trying to record ongoing activity and look up records from past operations. 

Homer Simpson has a garage full of lawn and garden equipment.  It is all labeled "Property of Ned Flanders".  The traditional view of this satire is that Homer is a thief who never returns items that he borrows.  I, however, would contend that Ned has simply found a way to store his excess inventory in Homer's garage.

Other Recognition Techniques

In these essays, I focus on visual recognition systems.  As a child, I was not so limited.  We had recently moved, so much of the material in the garage was packed in various boxes.  One day I was going through a box of vacuum tubes -- ancient electronic components that are kind of like fist-sized, glass transistors.  All of these tubes were wrapped in newspaper packing material.  As I picked up one of the wrapped objects, I knew immediately that it was not a tube.  It was a bottle containing about a half-pound of mercury.  In fact, I knew that it was mercury before I unwrapped it.  And I had no prior knowledge that we even had a container of mercury. 

This is an example of what I consider a proper object recognition system in operation.  The object was manipulated by a sensitive tactile handling system.  The manipulation system safely transitioned from working with glass containers of vacuum to glass containers of mercury.  The feedback from the system instantly provided object-density estimates.  Movements allowed me to detect that the object was not only NOT solid, but that the contents were fluid.  Silence during the manipulation allowed me to deduce that this was not a jar of washers or nuts.  The fact that, during rotation, the center of mass did not shift as one would expect with a granular fluid also helped narrow the identification.
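
The density inference can be caricatured in a few lines of code: estimate density from felt mass and apparent volume, and let that alone slash the candidate list.  The density figures are approximate; the candidate list and all names are invented:

    # Density alone rules out most candidates.  Densities are approximate,
    # in g/cm3; the candidate list is invented for illustration.
    DENSITIES = {"evacuated glass tube": 0.5, "jar of steel washers": 5.0,
                 "bottle of solvent": 1.0, "bottle of mercury": 13.5}

    def candidates_by_density(mass_g, volume_cm3, tolerance=0.3):
        felt = mass_g / volume_cm3
        return [name for name, d in DENSITIES.items()
                if abs(felt - d) / d <= tolerance]

    candidates_by_density(mass_g=230, volume_cm3=17)   # -> ["bottle of mercury"]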

The tentative identification became clearer as the bottle was unwrapped.  During this short time the entire system changed.  Dropping a newspaper-wrapped vacuum tube is a non-event.  Dropping a bottle of mercury is a whole different matter.  Even in the days before California-inspired paranoia concerning heavy metals, a child could be concerned about making a Big Mess.  My casual attitude became much more focused.  My grip became firmer.  My posture became more stable.  In short, the discovery led me from idle curiosity to attentive excitement in just a few seconds.  All without a single visual cue. 

Implementation

In light of this background, I contend that inventory management should be an automated, continuous, interactive process.  By "interactive" I mean that the inventory management system should physically interact with the items that it is managing, much as I did as a child, or as the hardware store employee does to become good at his job. 

This would allow the updating of inventory data to be treated as a routine maintenance operation instead of an inefficient, disruptive quarterly or annual event.  Managing objects in boxes (or boxes of objects) is only sufficient if there is a complete prior understanding of the actual, individual objects.  

Applications

Although I am describing this as Inventory Management, there are many applications.  The Inventory that we are managing need not be simply nuts and bolts.  For example:
  • Identify people and track their movement
  • Production operations in a manufacturing facility.  Time and Motion studies.
  • Ensuring "Appropriate Redundancy" of tools and supplies.  Not too many and not too few.
  • Transportation, Cargo and Freight operations.
  • Restaurants, Food Services and other Just-In-Time manufacturing
  • Produce tracking for food safety
  • Infrastructure Maintenance - Buildings and Utilities
  • Construction Industry - On-site Manufacturing and assembly
  • Health Care and Pharmaceuticals
  • Records Management - Customers, Patients, ISO 9000, etc.
  • Libraries and Collections 
And maybe we turn the whole thing around.  Make a geolocation system by mounting the cameras on some of the objects and using them to watch the surroundings.  No more reliance on a fixed infrastructure.  Recognition of objects and recognition of places are just two sides of the same coin, going into the same database.

Notes

Keep track of items in time and space.
Object identification - data entry, description, photo(s), size, mass, etc.
Object tracking - manual / automated
Object status -
            storage in bag, box, etc.; conditions (temperature, etc.)
            usage - quantity is partially used (count in / count out) liquids, aerosols, etc.
            assembly - object becomes part of something else
            disposal / recycling / disassembly - including damaged or incomplete items
            movement - new object / new location
Object query -
            find nearest
            find totals
            find expired (drugs, milk, etc.)
            find history - objects / locations
Must be so easy to use that it is ALWAYS used for Get and Store operations
Bill of Materials - Object associations or groupings
Nesting - recursive objects within objects
Inventory - continuous update / verification of object info when any storage location is accessed
            best if automated
            Important to detect unexpected objects
            Automated recognition.
Database -
            Object identification
            Object history
Photographic recognition
            Introduction process - controlled observation and examination
                        Multiple views and lighting
                        Unique feature extraction
                        Associate with similar objects
                        Establish photographic scale and allow scale invariance
                        Record markings or other identification features
            Do not require arbitrary categorization. 
                        Let the recognition engine make its own categories or groups.
            Do not expect perfect identification
                        "Narrowing it down" should be good enough
                        Combine with location history to complete the identification
Do not necessarily require "Parts in Bins".  Items can be anywhere.
            Preferred storage locations may help ensure (toolboxes, etc.) are properly stocked
Locations <--> Parts should be a Many-to-Many relationship
Tolerant of bad / conflicting data.  Generally self-correcting.
Examples
            Hardware store
            Borrow a cup of Sugar
            Craigslist
            Ned Flanders
            Tracking Santa Claus
Maintain orientation.  Don't spill it.
Disassembly -
            What is in it.  Hazardous?
            What is this part of?
            Survival inventory.  Motors contain coils of wire...
Expiration dates.  Use oldest first vs. Use freshest first.  Frozen bread anecdote.

Object history and current status.  A maid that always moves the dishes from the table to the dishwasher and THEN to the cabinet. 

Automated explorer manipulates objects to conduct inventory and cataloging.  Dangers include hazardous chemicals, high voltages, heavy/unstable objects, sharp tools, firearms, fragile objects, falls from high places, rotating machinery, buttons, switches and knobs, insects and pets.

Current systems and limitations
            Hospitality industry
                        Large number of identically furnished rooms
                        Maid service touches every object daily
                        Common maintenance, purchasing and disposal operations.
            Apartments, Condos and Tract homes
                        Many redundant objects
                        Progressively less commonality
                        Seasonal storage

The maid knows:
            Clean the nightstand
            Leave the lamp
            Wash the dishes
            Do not wash the paperback book

No communication regarding object location or availability
          You have to ask for it before the system will tell you anything
          System should be proactive and push appropriate information to the user in advance of need

CraigsList
            Arbitrary descriptions are a problem
            Locations are hidden until a query is made

Monday, September 12, 2011

What is a Source File?

Abstract
Source Files are ubiquitous in the world of software development. Little thought is given to their core technology concept, which is now more than fifty years old. The transition from a desktop-PC environment to tablet-based computing represents a sweeping paradigm shift for software developers. Rethinking the true requirements of software from the Source level onward will allow more modern tools to enter the development arena. This essay presents an analysis of the situation from a historical perspective and proposes new methodologies for use in the future.

Introduction
The concept of the Source File is so fundamental and so ingrained in the lives of software developers that there is seldom any thought given to its true function.

Historically, the program source was simply the human-readable form of input to a language compiler, assembler or translator. The most important aspect of Source Code was that it be able to be fed to the language processor without generating errors. This meant that virtually everything about the process of program creation was geared toward simplifying the task for the compiler. Combined with the early data entry methods - Hollerith cards, teletype machines and paper tape - the idea of a program source became established.

And now we find ourselves, fifty or more years later, expecting program source to look like it could be printed on a teletype machine. Eighty columns (or so). Fixed pitch fonts. Nothing but ASCII characters.
The scope of programming requirements has changed radically from the early days of computing.

Programming languages and methodologies have evolved in many directions - some useful, others not so much. Our largest projects still suffer from the "simplify it for the compiler" mindset. A prime example is the ever-present header files used by most modern programming tools. These redundant and hard-to-maintain files are used to provide descriptions of the required linkages in modular programming. Newer programming environments attempt to deal with the header/linkage problem with added features or conventions that allow the tools to handle much of this drudgery. Handling the header problem is only one of many steps that need to be taken to convert from a "what is easiest for the compiler" to a "what is easiest and least error-prone for the programmer" mentality.

What Are Source Files?
A Source File is a collection of text elements (variously known as syntactic symbols or tokens) that fall into one of four categories:
  • Human language, as in comments 
  • Directives, usually for the top-level language processor that uses the particular Source
  • Programming Language, the statements or syntactic constructs that we think of as the program itself  
  • Literals, as in data which will be manipulated by the program but which is encoded in some fashion and stored within the Source.
Note that String Literals almost always contain some form of data that could be described as Source Code. For example, the classic "Hello, World!" program used in beginning programming classes is a program which manipulates data in another language - in this case, English. Most string manipulation in modern programs exists solely for building elements of another programming language, which will be fed to another language processor.
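
The point is easy to see in miniature.  In the sketch below the host Python is almost incidental; the real "programs" live inside the string literals, one in English and one in SQL:

    # The host language mostly shuttles other languages around as strings.
    greeting = "Hello, World!"                       # an English "program"
    query = "SELECT name FROM parts WHERE qty < 5"   # a SQL program
    print(greeting)                                  # hand the English to a human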

In addition, many Source Files today contain sections written in many different programming languages.  For example, an HTML file might include CSS, JavaScript, HTML and English.  Modern text editors attempt (with varying degrees of success) to help sort all this out for the developer.  Highlighting, auto-completion and various warnings are used to help prevent the (sometimes spectacular) errors that result from feeding one language into the processor that is expecting another.  
I will make a brief sidebar to rail against the very poor implementation of what is referred to as "internationalization".  If a modern application needs to be relevant to a world-wide audience, the developers are expected to isolate all locale-specific aspects into separate "resource" files, to use localizable operating system calls that can never be fully evaluated on the developer's system, and to use character sets, string manipulation tools and keyboard modes that also cannot be tested on the developer's equipment.

Internationalization should be inherent in the program development process - not scabbed-on to a final product.  International test cases should always be visible to the developer and the operation and aesthetics of the product should be visible in a simple and robust manner at all times. To achieve this, the adaptive, multi-lingual keyboards on tablet computers, as well as the world-wide collaboration techniques discussed here become critically important.

Consider the notational nightmare that results from trying to pass a literal CSS property name to a JavaScript function that is being invoked from an HTML event handler.  The nesting of escaped quotation marks converts a (relatively) simple concept into something that requires deep understanding by the programmer - and is thus virtually impossible for independent developers to modify or verify.

Unfortunately, one only gets help from the editor in static situations such as the hand-written web page.  If the program needs to dynamically generate code (such as SQL queries) it becomes much more difficult to test, debug and verify.  There are virtually no tools that address this dynamic code creation problem, although it is one of the most common tasks performed today.  Many ad-hoc build-a-string techniques are used, but never with any consistency or robust certification.
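
Both problems fit in one small sketch: the nested quoting of a CSS name inside JavaScript inside HTML, and ad-hoc SQL string building next to the driver-managed alternative.  The setStyle handler is hypothetical; the sqlite3 placeholder form is standard:

    # Three languages nested in one string: HTML holding a JavaScript
    # handler that passes a CSS property name.  (setStyle is hypothetical.)
    button = ('<button onclick="setStyle(this, \'background-color\', '
              '\'red\')">Danger</button>')

    # Ad-hoc dynamic SQL: fragile, and injectable...
    import sqlite3
    name = "O'Brien"
    bad = "SELECT * FROM users WHERE name = '" + name + "'"   # breaks on the quote

    # ...versus letting the driver do the quoting (standard placeholder form).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES (?)", (name,))
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()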

How Are Source Files Used?

Source files seem to be used in four distinct but interrelated ways.
  • Design
In the Design phase the source is used as part of the collaboration and documentation for the project.  When properly done, this provides a good part of the framework that multiple developers use to ensure accurate understanding of the requirements and to convey details of the implementation as it progresses.
  • Edit
The Edit phase is where most human interaction with the source occurs.  This is where much current effort in man-machine interaction is focused.  Making better editors with greater knowledge of program internals and the possible intentions of the programmer has been the goal since the first Integrated Development Environments (IDEs) were created.

Big, fancy IDEs that need big, fancy displays are moving in the wrong direction.  As the computing world moves away from the PC-centric toward the tablet-based future, our interaction will become much more focused.  The display-and-keyboard will adapt to the operations that we really want to perform, and will retreat from the overwhelming offering of everything we might want to do.  Adaptive keyboards, touch screens and more compact, high resolution presentations provide opportunities to completely rethink the developer interface. 
  • Build
Ultimately, the purpose of the program source is still to create a working application program.  This means that the Programming Language parts of the Source must be submitted to a language processor.  In many cases, this is not a trivial operation.  Many separate functions or modules must be combined, their linkages validated and a final product produced.  This build process can be time-consuming and can result in errors that are the ultimate responsibility of diverse members of the development team.

Expecting the build process to be performed on individual desktop computers is something that needs to be reevaluated as we move into the 21st century era of cloud storage and cloud computing. 
  • Test
After a successful build developers generally expect to test the new application.  This may require transferring the newly created code to a test environment, which may be a particular hardware device or a software simulator.  There should exist a suite of test cases for the "finished" application.  Although some things can be automated, in many cases manual interaction and visual aesthetics will be important factors.  Handling these in a consistent manner, and ensuring that full testing is actually performed for each build or release candidate is a serious problem in the development process.  Maintaining and documenting both known-good and known-bad test cases is of critical importance and can be overwhelming for complex projects.

The source-level run-time debugger is one of the great advances in PC-based computing and is a major feature of all Integrated Development Environments.  Unfortunately, the multiple-programming-language nature of modern programs limits the actual usefulness of the Debugger.  While I occasionally use a debugger for certain compute-only functions, and neophyte programmers really benefit from the capability, I believe that the  problems of client-server interactions, dynamic scripts and cross-platform compatibility are more important challenges that cannot be addressed by a simple debugger.  Therefore, the debugger capability should not be viewed as being of critical importance when evaluating future development tools and environments.

I believe that every function or statement should have available an easily-accessible library of test cases.  This would be sort-of like using a current debugger to run to a breakpoint and then be able to examine and modify the data structures as they are processed.

Knowing the working-set (data structures that are accessed or modified) used by any particular function is of critical importance to verifying the correctness of a program.  In general, the compilers know this, although sometimes it cannot be determined until run-time.  Unfortunately, this critical knowledge is never made available to the developer.

The test-case library would take the place of ancillary test programs used during the development process.  Currently, such test programs are created, as-needed, by individual developers.  They are never documented, maintained or shared.  Even worse, they are often discarded once the developer feels that his function is "working".

Assertions in various programming languages are used to catch errors at run-time when selected data does not match expectations.  I would extend the Assertion concept by allowing the capture of a function's working-set - at run-time - and saving it as an addition to the test case library.  This allows the collection of robust sets of real-world data.  When the function is modified or replaced with code that should be equivalent, these collected test cases can provide input to an automated verification process.
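
A sketch of that extended-Assertion idea, in Python terms: a decorator that records each real call's inputs and result into a per-function test library, which can later replay them against a supposedly equivalent replacement.  Everything here is illustrative:

    import copy
    import functools

    TEST_LIBRARY = {}    # function name -> list of (args, kwargs, result)

    def capture_cases(fn):
        """Record each real-world call as a regression test case."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            TEST_LIBRARY.setdefault(fn.__name__, []).append(
                (copy.deepcopy(args), copy.deepcopy(kwargs), copy.deepcopy(result)))
            return result
        return wrapper

    def verify_equivalent(name, replacement):
        """Replay captured cases against a supposedly equivalent function."""
        return all(replacement(*args, **kwargs) == result
                   for args, kwargs, result in TEST_LIBRARY.get(name, []))

Decorating a function during normal runs fills the library; verify_equivalent then checks a rewrite against the captured real-world cases.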

What Does the Future Hold?
 
"Keyboards" should be viewed as "Token Selectors" instead of letter-by-letter token builders. IDEs try to do this in some cases, but fail because of (1) the multiple-language problem, and (2) the "word-ish" nature of tokens needed to convey meaning to people.

A single font size doesn't fit all.  Nested structures could be displayed with smaller fonts, so you could literally zoom in to move deeper into the code.  Current tree-collapsing displays give an essentially useless all-or-nothing display.

Need more screen space?  Get a second iPad.  Properly done, this should be no more expensive than multiple monitors and should be much more useful in the general sense.

All source should reside in the cloud.  Current Version Control Systems make much ado about "Checking Out" and "Committing" changes to modules.  This makes it (intentionally) impossible for multiple developers to share work on individual modules.  Storing all development trees in the cloud and changing the checkout process from a "download it to my computer" to a "mark this group of changes as pending" in the cloud concept would be a great improvement.

All Build operations should be performed in the cloud.  There should be no need for any compute-intensive operations at a user's desktop.  This allows high-powered, dynamically-allocated resources to be brought to bear on what should be an independent background task. 

What Could Replace Source Files?

Given the ways current source files are used, the multiple-language problem, the need for collaborative work, and the trend away from desktop computing, it seems obvious that we are actually referring to a Database application.  We have been using the computer's simple file system to store and access our programs, even though that has never been a particularly appropriate technique.

What is needed is a Cloud-based database storage system which is accessed by multiple Very Thin Clients (read: iPad apps) that provide the user interaction during the software development process.  This inherently allows world-wide collaboration among the development team.

The compile and build process is performed as a Cloud-based service which provides additional elements to the Database.  These elements would include error and diagnostic information, compiled code, linkage information and target application code.  Automated test cases could be run and results verified.  All these capabilities would be available to every development team member, without having to have separate instances of hardware, tools, environments, etc.
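
Without presuming the specific (non-relational) design hinted at below, even a crude sketch shows how little of the idea depends on files: program elements, their relationships and the build products become records.  All names are invented:

    # Crude sketch of source-as-database.  (The author hints that his real
    # design is not relational; this is only to make the idea concrete.)
    from dataclasses import dataclass, field

    @dataclass
    class SourceElement:                  # replaces "a region of a text file"
        element_id: int
        kind: str                         # "function", "comment", "literal", ...
        language: str                     # "python", "sql", "english", ...
        body: str
        depends_on: list = field(default_factory=list)   # linked element ids

    @dataclass
    class BuildArtifact:                  # what the cloud build service adds
        element_id: int
        diagnostics: list                 # errors and warnings for the element
        compiled_code: bytes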

I have some very specific ideas about how such a specialized software development database would be structured, implemented and accessed.  Of course, it has nothing to do with relational databases, SQL, or  traditional Client-Server techniques.

I Am Requesting Some Feedback

For now, I am interested in feedback from as diverse a set of developers as I can manage.

Specifically, please let me know how you develop your code.

What types of applications do you develop?

What hardware do you use?

What size screens do you need?

What applications do you need to run?  Editors? Compilers? Debuggers? Simulators? Browsers? Documentation tools? Email?  Chat?

How many things do you try to do at once?  How many things do you absolutely need to do at once?

If you were going to work using a tablet computer, what would you see as advantages?

What would be disadvantages, even given your idea of a perfect tablet-of-the-future?

What do you think would be impossible to do using a tablet?