Classes vs Prototypes

Courtesy: Antero Taivalsaari via Carnotaurus

Journal of Object Oriented Programming
Nov/Dec 1997, Vol. 10, No. 7
Antero Taivalsaari
Nokia Research Center, PO Box 45, 00211 Helsinki, Finland

Classes Versus Prototypes: Some Philosophical and Historical Observations


This paper investigates the historical and philosophical roots of object-oriented programming, taking a look at the history of classes and classifications, and presenting some critical remarks on the use of classes as the basis for domain modeling.  The article then compares the traditional class-based approach with the more recently presented prototype-based approach, and takes a look at some relevant research results from the field of cognitive psychology.  Finally, the implications for programming languages and design methods are discussed.

“Objects in the real world have only one thing in common – they are all different”.  – Anonymous.

In recent years, an alternative to the traditional class-based object-oriented language model has emerged.  In this prototype-based paradigm [6,7,14,20,21,27,30,34] there are no classes.  Rather, new kinds of objects are formed more directly by composing concrete, full-fledged objects, which are often referred to as prototypes.  When compared to class-based languages, prototype-based languages are conceptually simpler, and have many other characteristics that make them especially suitable for the development of evolving, exploratory, and distributed software systems.

The distinction between class-based and prototype-based systems reflects a long-lasting philosophical dispute concerning the representation of abstractions.  Plato viewed forms – stable, abstract, “ideal” descriptions of things – as having an existence more real than instances of those things in the real world.  Class-based languages such as Smalltalk, C++, and Simula are Platonic in their explicit use of classes to represent similarity among collections of objects.  Prototype-based systems such as Self, [34] Omega, [5,6] Kevo, [31,32] GlyphicScript, [15] and NewtonScript [29] represent another view of the world, in which one does not rely so much on advance categorization and classification, but rather tries to make the concepts in the problem domain as tangible and intuitive as possible.  A typical argument in favor of prototypes is that people seem to be much better at dealing with specific examples first and then generalizing from them than they are at absorbing general abstract principles first and later applying them in particular cases.

Prototypes give rise to a broad spectrum of interesting technical, conceptual, and philosophical issues.  In this article we take a rather unusual, nontechnical approach and investigate object-oriented programming and the prototype-based programming field from a purely philosophical viewpoint.  Some historical facts and observations pertaining to objects and prototypes are presented, and conclusions based on those observations are derived.


The central concepts behind object-oriented programming – classes, instances, and classification – have been of interest to human beings for centuries.  The earliest characterization of classes (types) versus instances was given by Plato over two thousand years ago. [24]  Plato made a clear distinction between forms – i.e., stable, immutable, “ideal” descriptions of things – and particular instances of those forms.  Plato regarded the world of ideas as much more important than the world of instances, and contended that forms always have an existence that is more real than that of concrete entities and beings in the real world. [24]

Research into classification (to be precise: biological classification) was continued by Plato’s student, Aristotle, who had an endless interest in understanding and organizing the world down to its smallest details.  Whereas Plato was interested mainly in ideas and “eternal” concepts, Aristotle was the first philosopher to consider natural phenomena.  In his works – totaling more than 170 – Aristotle aimed at providing a comprehensive, detailed taxonomy of all natural things – plants, animals, minerals, and so on. [1]  His classifications were based on the same idea that underlies object-oriented programming today: a group of objects belongs to the same category if the objects have the same properties.  Thus categories of objects are defined by common properties that a group of objects (the extension of the category) share.  New categories can be defined in terms of other categories if the new categories have at least the same properties as the defining (“genus”) categories.  The general rule for classification can be presented as

essence = genus + differentia

In other words, categories are defined in terms of their defining properties and distinguishing properties.  This corresponds precisely to the idea behind traditional class-based object-oriented programming in which a class is defined in terms of its superclass (genus) and a set of additional variables and methods (differentia).
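This correspondence can be sketched directly in a class-based language.  The following JavaScript fragment is purely illustrative (the names Animal and Bird are assumptions for the example, not drawn from the paper): the superclass supplies the genus, and the subclass adds only the differentia.

```javascript
// Genus: the defining category, with its shared properties.
class Animal {
  constructor(name) { this.name = name; }
  breathe() { return `${this.name} breathes`; }
}

// Differentia: Bird is defined as Animal plus the distinguishing property.
class Bird extends Animal {
  fly() { return `${this.name} flies`; }
}

const robin = new Bird("robin");
console.log(robin.breathe()); // genus property, inherited: "robin breathes"
console.log(robin.fly());     // differentia, defined here:  "robin flies"
```

Every instance of Bird necessarily carries both the genus and the differentia properties, which is exactly the Aristotelian assumption the rest of the article questions.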

Aristotle’s work has led to the common idea, at least in the West and in many other cultures, that there is a single correct taxonomy of natural things – animals, plants, minerals, and so on.  Unfortunately, the level of categorization depends heavily upon who is doing the categorizing and on what basis.  In practice, people have many ways of making sense of things – and taxonomies of all sorts abound.  Yet the idea that there is a single universal taxonomy of natural things is remarkably persistent. [19]

(Aristotle himself realized that his model has problems, and noted that many objects have “accidental” properties, i.e., properties that are characteristic of the object under examination but atypical for those kinds of objects in general.  Thus the actual substance of concepts was defined in terms of two aspects: the essence and the accidents.  This dichotomy has later inspired many researchers, including Fred Brooks. [10])


Aristotle’s work on classification stood unchallenged for a long time.  Categories were regarded as well understood and unproblematic.  They were assumed to be abstract containers; things were either inside or outside the category.  The idea that categories of things are defined by common properties is not only our everyday folk theory of what a category is, but it is also the principal technical theory – one that has been with us for more than two thousand years. [19]  Aristotle’s ideas have stimulated the work of many researchers, including, for example, the famous Scandinavian natural scientist Carl von Linné.

The Aristotelian “classical” view was first challenged in the 19th century by the famous British philosophers W. Whewell and W.S. Jevons.  They emphasized that there are no universal rules to determine what properties to use as a basis of classification of objects.  Furthermore, they argued that classification is not a mechanical process, but requires creative invention and evaluation.  Consequently, Whewell and Jevons argued that there are no objectively “right” classifications.  In light of this view, the task of constructing general rules for classification seems rather complicated.

Criticism of classification continued in this century with Ludwig Wittgenstein, [35] who observed that it is difficult to say in advance exactly which characteristics are essential to a concept.  Wittgenstein gave several examples of seemingly simple concepts that are extremely difficult to define in terms of shared properties.  A classical example is the concept of “game”. [35]  Some games involve mere amusement, such as ring-around-the-rosy.  In that game there is no competition – no winning or losing – though in other games there is.  Some games involve luck, such as board games in which a throw of the dice or a draw from a card deck determines the next move.  Others, such as chess or water polo, involve skill.  Still others, such as poker or Monopoly, involve varying degrees of both luck and skill.  The number of players may also vary considerably from one, as in solitaire, to hundreds, thousands, or even millions, as in lotteries or horse race betting.  There are also games in which no players are needed at all – such as the Game of Life invented by John Horton Conway – but many people do not regard these as “real” games.

Another example of a concept that is hard to define in terms of shared properties is “work of art”.  Because no one can really define clear boundaries between what is art and what isn’t, there is no general class “work of art” with shared, common properties.  The definition is subjective and depends heavily on the situation or viewpoint (Tolstoy once made it a criterion of value for a work of art that it should be intelligible to everybody: “The significance of an object lies in its universal intelligibility”).

After presenting his criticism of the classical model, Wittgenstein defined what can be seen as the origin of prototype-based programming: the notion of “family resemblance”.  Games do not have any shared, common defining characteristics.  Instead, they share a sort of family resemblance: baseball is a game because it resembles the family of activities that people call games.  Members of a family resemble each other in various ways: they may share the same build or the same facial features, the same hair color, eye color, temperament, or the like.  But there need be no single collection of properties shared by everyone in a family. [19]  Except for technical terms in mathematics, Wittgenstein maintained that for most concepts, meaning is determined not by definition but by family resemblances.  Such terms can be defined only in terms of similarity and representative “prototypes”.


Wittgenstein’s results have sparked research into so-called prototype theory.  J.L. Austin, [2] L. Zadeh, [37] and F. Lounsbury, among many others, have studied the area.  But it was Eleanor Rosch who introduced prototype theory in the mid-1970s. [25,26]  Rosch observed that her own studies and those of others demonstrated that categories, in general, have best examples (called prototypes) and that all of the specifically human senses play a role in categorization.  Thanks to the pioneering work of Rosch, categorization has become a major field of study within cognitive psychology.

In her criticism of the classical approach, Rosch focused on two implications of the classical theory: [19]

o  First, if categories are defined only by properties that all members share, then no members should be better examples of the category than any other members.

o  Second, if categories are defined only by properties inherent in the members, then categories should be independent of the peculiarities of any beings doing the categorizing; that is, they should not involve such matters as human neurophysiology, human body movement, and specific human capacities to perceive, to form mental images, to learn and remember, to organize the things learned, and to communicate efficiently.

It can be shown relatively easily that the aforementioned implications do not typically hold when people perform classification.  For instance, the fact that some instances are “better” representatives of categories than others can be confirmed simply by asking people to give examples of “numbers”.  Typically people (excepting perhaps Claude and Russell) respond with relatively simple integers such as 1, 2, 5, or 42 rather than -127.798432, 0x12abff4c, or 125-5i, although in principle real, complex, hexadecimal, or transfinite numbers would be equally good examples of numbers.  Thus integers (and small integers in particular) are, in a sense, better examples than other kinds of numbers.

Also, it can be shown rather easily that our background, mental capabilities, and experience play a significant role in the classification process.  For instance, some people living near the equator are claimed to be unable to distinguish between snow and ice, whereas the Eskimos have numerous words for describing different types of snow.  The Dani people of New Guinea have only two basic color terms: mili (dark-cool) and mola (light-warm), which cover the entire spectrum, and they have great difficulty in differentiating between colors in more detail. [19]  A professional limnologist might be able to identify several hundred or even thousands of different animals living in the water, whereas a layman might recognize only a few dozen.  Also, classifications by persons who have substantial expertise in a certain area are typically much more refined than those created by less experienced people (conversely, people with little expertise easily make mistakes such as classifying whales and dolphins as fish, and so on).

In general, cognitive observations such as those above revealed some inherent flaws in the traditional classical model, and formed the basis for research leading to the prototype theory presented by Rosch and others.  The essential results of prototype theory leading up to the cognitive models approach can be summarized as follows: [19]

o Some categories, such as tall man or red, are graded; that is, they have inherent degrees of membership, fuzzy boundaries and central members whose degree of membership (on a scale from zero to one) is one.

o Other categories, such as bird, have clear boundaries, but within those boundaries there are graded prototype effects – some category members are better examples of the category than others.

o Categories are not organized just in terms of simple taxonomic hierarchies.  Instead, categories “in the middle” of a hierarchy are the most basic, relative to a variety of psychological criteria.  Most knowledge is organized at this level.

o The basic level depends upon perceived part-whole structure and corresponding knowledge about how parts function relative to the whole.

o Categories are organized into systems with contrasting elements.

o Human categories are not objectively “in the world,” external to human beings.  Many categories are embodied, and defined jointly by the external physical world, human biology, the human mind, plus cultural considerations.
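The first point above, graded membership, can be illustrated with a small sketch in the spirit of Zadeh’s fuzzy sets. [37]  The thresholds below are arbitrary assumptions chosen only for illustration:

```javascript
// Graded category "tall": a degree of membership between 0 and 1,
// rather than a binary in/out decision as in the classical model.
function tallness(heightCm) {
  if (heightCm <= 170) return 0;          // clearly outside the category
  if (heightCm >= 190) return 1;          // central member, degree 1
  return (heightCm - 170) / 20;           // linear ramp across the fuzzy boundary
}

console.log(tallness(160)); // 0   - not a member
console.log(tallness(180)); // 0.5 - borderline case
console.log(tallness(195)); // 1   - prototypical "tall" person
```

Note that the category has no sharp boundary: membership fades gradually, exactly what the classical container model cannot express.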

It has also been shown that in many situations people perform classification on an almost totally ad hoc basis, creating unconventional and previously nonexistent structures on the fly for some immediate purpose.  Examples of such categories include:

o what to get for a birthday present

o what to do for entertainment on a weekend

o things to be taken from one’s home during a fire

For a detailed discussion of the cognitive and other observations and experiments that have led to the development of prototype theory, the reader is referred to the excellent book Women, Fire, and Dangerous Things by George Lakoff. [19]

Albeit rather philosophical, the preceding discussion has some important implications for the world of programming.  In this section we present some thoughts on the consequences that the theories of classification have for programming languages and software development methods.  Note that classification has been studied rather actively in the field of artificial intelligence, [9,33] but surprisingly, many object-oriented software designers seem to be almost completely unaware of the conceptual and philosophical background that underlies object-oriented programming.

Recognizing the limited modeling capabilities of object orientation

As mentioned earlier, the programming models used in most object-oriented languages today are surprisingly similar to the Aristotelian classical model of the world.  For instance, object-oriented languages typically assume that new classes are defined in terms of shared properties, and that instances of a class always have an identical set of properties.  Furthermore, the class inheritance model used in most object-oriented languages closely resembles the Aristotelian way of defining new classes (categories) in terms of existing genealogical parents.  The inheritance hierarchies characteristic of object-oriented programs also bear a close resemblance to the Aristotelian idea of a single correct taxonomy of all natural things.

In philosophy it has already been shown that the Aristotelian classical model has severe limitations when it comes to modeling real-world phenomena.  Taking into account the conceptual similarity of the classical model and the current object-oriented paradigm, it is therefore fairly obvious that today’s object-oriented languages have many of the same shortcomings when it comes to modeling the real world.  This is exemplified by the fact that many concepts and domains cannot naturally be modeled in terms of shared properties.  Examples of such “objects” include traffic jams, photons, water, the ozone hole, and the greenhouse effect.  If we want to use the current object-oriented paradigm to model concepts such as these, we will have to explicitly resort to discrete, stochastic, or probabilistic simulation models in which the actual problem domain is first converted to a form in which objects with shared properties exist.  But the actual concepts themselves simply cannot be defined in terms of shared properties.

In most cases the limited modeling capabilities of the current object-oriented paradigm are not really a problem, because it usually suffices to have “good enough” models that describe the problem domain at a sufficiently rich level of detail.  Also, most business applications can be represented fairly easily by concepts that are defined in terms of shared properties.  Furthermore, despite these inherent limitations, the modeling capabilities of the object-oriented paradigm are much better than those of other well-known programming paradigms.  Consequently, the realization that object-oriented programming is not really capable of modeling the real world is more an observation than a real problem.  What is important is that we should steer clear of claims such as “object-oriented programs directly reflect the real world,” which have been surprisingly common in the past, at least in object-oriented marketing literature.

No optimum class hierarchies

Another implication of the Aristotelian classical model and its adoption in current object-oriented programming languages is the fact that there are no “optimum” class hierarchies.  This is easily seen in everyday design and implementation of object-oriented systems.  In many situations a class hierarchy that is very natural and intuitive from the conceptual point of view is not the most reusable, extensible, or time- or space-efficient one.  The most reusable or extensible libraries are not necessarily efficient or conceptually elegant.  And the most efficient libraries may lack both conceptual elegance and extensibility.  In general, the design of a good class hierarchy typically involves trade-offs.  Cook’s paper on the redesign of the Smalltalk-80 class library presents an interesting example of this phenomenon. [13]  Also, the mixin style of programming [8] often leads to highly reusable libraries at the cost of reduced conceptual clarity.

An implication of the fact that there are no optimum class hierarchies is that the designers of object-oriented software should always be prepared for change.  No matter how well designed a class library is, requirements may change in such a manner that substantial changes in the library are needed.  Consequently, there is a clear need for methods and tools that allow class libraries to be easily transformed from one form to another.  Such methods and tools have been investigated by several researchers, including Bergstein, [4] Casais, [11] and Opdyke. [22,23]

Consensus-driven design and “good enough” models

There are no optimum classifications, and therefore no optimum class hierarchies, which leads to the observation that there is no such thing as perfect design.  In other words, when designing object-oriented software, we should not spend too much time trying to come up with a solution that would meet all desired requirements and criteria.  Rather, the design phase should be more like a consensus-oriented or consensus-driven process in which a group of designers aims at reaching a sufficient, or “good enough,” model of the problem domain.  A central goal of this process is to come up with a common vocabulary that will assist designers in subsequently communicating about the problem domain and discussing their designs more efficiently.  This is far more important than the perfection of the design.  Also, it should be kept in mind that the requirements are likely to change and that iteration is typically needed (this will be discussed shortly).  Thus good enough is usually enough, and spending additional time on design would just lead to work that could be wasted.  (Of course, deciding what is good enough is often very hard; Ed Yourdon has provided interesting insights into the topic in his recent paper on good enough software. [36])

Basic classes and the need for iteration

One of the central results of the prototype theory presented by Rosch and others [25,26] is the observation that not all concepts and categories are equal.  Rather, there are categories that are more basic than others, and objects that are better representatives of categories than other objects.  These basic categories (classes) and best representative objects are those that are usually found first, whereas the more general and/or more specific classes can only be deduced later, when more experience with the problem domain has been gathered.

An interesting observation is that when categories are organized into taxonomic hierarchies, such as class hierarchies in object-oriented programming, the basic classes typically end up in the middle of the class hierarchy.  In contrast, those classes that are at the top (root) or at the bottom (leaves) of the hierarchies are typically of less interest either because they are overly generic or overly specific for the purpose of examination.

However, the implementation of an object-oriented class hierarchy always proceeds (technically) from top to bottom; that is, superclasses must exist before their subclasses do.  Therefore, there is an inherent conflict between the classification process and the implementation of an object-oriented class hierarchy: the generic, more abstract classes can only be found once a substantial amount of expertise in the problem domain has been gathered.  If the implementation of a class hierarchy is started a priori (i.e., before a sufficient level of expertise has been reached), substantial iteration in the implementation of the library is inevitable, because subsequent experience is bound to reveal generalizations and new abstractions that will require changes in the superclasses.  Alternatively, we could try to postpone the implementation until the final classification of the problem domain has been reached, but because we already know that perfect classification is rarely possible, this will not solve the problem in the long run.

In general, by appealing to Aristotelian philosophy and prototype theory, we can argue that the construction of object-oriented class libraries is an inherently iterative process.  This fact has been presented informally by several researchers and practitioners in the area of object-oriented programming.  For instance, Johnson and Foote [17] have argued that abstractions are usually discovered by generalizing from a number of concrete examples, and that the abstraction process is likely to succeed much better if we have a lot of experience with the problem domain.  That is why it is typically much easier to build good class hierarchies for graphical windowing systems and parsers than, for example, for nuclear plants or solar systems.  In general, useful abstractions are usually designed from the bottom up; i.e., they are discovered rather than invented, and will usually undergo a number of iterations until they become conceptually and technically satisfactory.  The less experience we have with the domain, the more iteration is needed.


Not all object-oriented programming languages are class-based.  There is an interesting category of object-oriented languages in which there are no classes at all.  In this prototype-based object-oriented programming model, all programming is done in terms of concrete, directly manipulable objects that are often referred to as prototypes.  These prototypical objects resemble the instances of class-based languages, except that prototypical objects are more flexible in several regards.  For instance, unlike in class-based languages, in which the structure of an instance is dictated by its class, in prototype-based languages it is usually possible to add or remove methods and variables at the level of individual objects.  Other differences include that in prototype-based languages object creation usually takes place by copying, and that inheritance is replaced by some other, less class-centered mechanism.  Self, [34] for instance, uses a mechanism called delegation, [20] which allows objects to forward messages to other objects when the current object does not know how to respond to a given message, thereby supporting the essence of inheritance: incremental modification. [12]  Kevo [31,32] uses a mechanism called concatenation to reach the same goal.
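Delegation of this kind survives today in JavaScript, whose object model was influenced by Self.  A minimal sketch (the point object and its slots are purely illustrative):

```javascript
// A concrete, full-fledged prototype object; no class is involved.
const point = {
  x: 0,
  y: 0,
  describe() { return `(${this.x}, ${this.y})`; }
};

// New objects are formed from the prototype, not instantiated from a class;
// p delegates any message it cannot answer itself to point.
const p = Object.create(point);
p.x = 3;   // slots can be added or overridden per individual object
p.y = 4;

// p has no describe() of its own, so the message is delegated upward.
console.log(p.describe());   // "(3, 4)"

// An individual object can even gain methods its prototype never had.
p.norm = function () { return Math.hypot(this.x, this.y); };
console.log(p.norm());       // 5
```

The prototype itself is untouched by these per-object changes, which is precisely the incremental modification the article describes.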

Prototype-based languages are conceptually elegant and possess many other characteristics that make them appealing.  These languages are also seemingly closer to the prototype theory presented by cognitive psychologists and philosophers.  For instance, the ability to modify and evolve objects at the level of individual objects reduces the need for a priori classification and encourages a more iterative programming and design style.  In general, when working with prototypes, one typically chooses not to categorize but to exploit alikeness.  Rather than dealing with abstract descriptions of concepts (intensions), the designer is faced with concrete realizations of those concepts.  Consequently, design is driven by evaluation in the context of examples: designers run their solutions to evaluate them in the context of some input to the program.

The change of focus in the design phase raises an interesting question: do prototype-based object-oriented languages help overcome the limitations of the Aristotelian tradition that constrain the modeling capabilities of the current class-based object-oriented languages?  Unfortunately, this is not really the case.  Most prototype-based languages of today are motivated by relatively technical matters.  For instance, prototypes are commonly used for reaching better reusability through increased sharing of properties and more dynamic binding of objects, or for providing better support for experimental programming. [34]  In contrast, they do not usually take into account the conceptual modeling side, let alone pay any attention to the philosophical basis that underlies object-oriented programming.  In a way, thus far the developers of prototype-based object-oriented programming languages seem to have been even more ignorant of these underlying conceptual and philosophical issues than, for example, the Scandinavian inventors of the class-based object-oriented paradigm. [3,18]


Perhaps the only object-oriented language that comes close to the family resemblance model presented by philosophers and cognitive psychologists is Kevo. [31,32]  Kevo differs from most other prototype-based object-oriented languages in that it does not support inheritance or delegation in the traditional way.  Instead of these and other mechanisms that put a heavy emphasis on sharing and shared properties, Kevo objects are logically stand-alone and typically have no shared properties with each other.  (Note that at the implementation level Kevo uses sharing extensively to conserve memory, but this is fully transparent to the programmer.)  New objects are created by copying, and the essence of inheritance, incremental modification, is captured by providing a set of module operations that allow objects to be manipulated flexibly.  Late binding is used to ensure that methods defined earlier can be overridden to extend existing behavior in an object-oriented manner.

To make it possible to perform modifications not only at the level of individual objects but also per group, Kevo uses the notion of an object (clone) family.  An object family is a system-maintained group of objects that are considered to be similar.  When objects are modified, the system implicitly moves objects from one family to another, or creates new families as necessary.  For instance, when new properties are added to a window object, a new family of objects is created, unless another object with identical properties already exists.  Conversely, if the added properties are later removed from the window object, the object will return to its earlier family (provided that the family still exists).  As the criterion of similarity, object interface compatibility is used, meaning that objects are considered similar if they have the same external interface (signature).  Ideally, object comparison would be based on behavioral compatibility, that is, on ensuring that objects react to external stimuli identically; in practice, however, an algorithm that could determine efficiently and with complete certainty whether two objects are behaviorally compatible is impossible, since behavioral equivalence of programs is undecidable in general.
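The family mechanism can be approximated with a short sketch.  This is an illustration of the idea, not actual Kevo code (Kevo itself was not a JavaScript system), and the interface criterion here is simply the sorted slot names:

```javascript
// Sketch: a "family" groups objects whose external interface is identical.
// We use the sorted slot names as the interface signature.
const signature = (obj) => Object.keys(obj).sort().join(",");

const families = new Map();   // signature -> Set of objects

function assignFamily(obj) {
  const sig = signature(obj);
  if (!families.has(sig)) families.set(sig, new Set()); // new family on demand
  families.get(sig).add(obj);
  return sig;
}

const w1 = { width: 100, height: 50, draw() {} };
const w2 = { width: 20,  height: 80, draw() {} };
assignFamily(w1);
assignFamily(w2);
console.log(families.get(signature(w1)).size); // 2: same interface, same family

// Adding a slot changes w1's interface, so it moves to a (new) family.
families.get(signature(w1)).delete(w1);
w1.title = "main";
assignFamily(w1);
console.log(families.get(signature(w1)).size); // 1: w1 is alone in its new family
```

In Kevo the bookkeeping shown here is done implicitly by the system; the sketch only makes the grouping criterion concrete.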

Object families in Kevo have a conceptual relation to the family resemblances presented by philosophers.  When combined with the notion of stand-alone objects and the reduced focus on shared properties, this naturally leads to a design and programming style in which advance classification and categorization have a lesser role.  Yet even Kevo is still far from the model presented by prototype theorists, who argue that subjective perceptions have a central role in modeling and classification.  There is some recent work in the area of subject-oriented programming that aims at taking into account the subjective factors in object-oriented design, but at this point this work is still mostly preliminary. [16]

A Macintosh implementation of Kevo is freely available.  Detailed information on Kevo is provided in the author’s doctoral thesis, [32] which is also available electronically.


We have given a brief overview of the historical and philosophical background of object-oriented programming, and examined the implications of this background for current object-oriented programming languages and methods.  We recognized Aristotle as the conceptual father of class-based object-oriented programming, whereas the work of Wittgenstein has served as an inspiration for the alternative prototype-based approach.  It was noted that in philosophy and cognitive psychology the Aristotelian classical model was abandoned long ago, whereas in object-oriented programming that model is still prevalent.  This is not necessarily a problem, however, because typical business applications can be modeled fairly well even with the limited classical model, especially if the designers are aware of its limitations.  It was also pointed out that current prototype-based object-oriented languages are poorly developed when it comes to taking advantage of the conceptual and philosophical benefits of the prototype-based approach, and that many possibilities for future research in this area remain.


1. Barnes, J., Ed. The Complete Works of Aristotle, Vol. 1 (the revised Oxford translation), Princeton University Press, 1984.

2. Austin, J.L. Philosophical Papers, Oxford University Press, 1961.

3. Birtwistle, G.M., et al. Simula Begin, Studentlitteratur, Lund, Sweden, 1973.

4. Bergstein, P. “Object-Preserving Class Transformations,” A. Paepcke, Ed., OOPSLA ’91 Conference Proceedings (Phoenix, Arizona, Oct. 6-11), ACM SIGPLAN Notices 26(11):299-313, Nov. 1991.

5. Blaschek, G. “Type-Safe OOP with Prototypes: The Concepts of Omega,” Structured Programming, 12(12):1-9, Dec. 1991.

