------------------------------------------------------------------------------
Why computers need a new direction for system software,
and what this new direction is;
or
Why POSIX is bullshix, and what MOOSE must be
======================================================

   A computer is a machine that can perform very quickly computations that
are each very simple. By combining these simple tasks it somehow achieves more
complicated ones, though with great difficulty. In this respect it is the very
opposite of human collaborators, who can do complex tasks, but are slow
and make a lot of mistakes when carrying out simple computations.
   Software is the combinatorial part of computers. It allows all useful jobs
to be done, and all human-computer interaction to happen. But every computer
has some primary piece of software that serves as a platform for other
software to be run upon: this is called system software.
   This paper aims at demonstrating that while currently available system
software provides a lot of *expedient* services, its low-level structure
forbids it to provide *useful* services, and leads to huge, inefficient,
unusable software. This paper proposes to explain why *system* software, and
not "user software", must include some (well-known) techniques to achieve
real usefulness.



1) Current state and direction of system software
-------------------------------------------------

   It is remarkable that while computers have, since their origins, grown in
power and speed at a constant exponential rate, system software has evolved
only slowly; it does not offer any new tools to master the increasing power of
hardware, but only enhancements of obsolete tools, and new "device drivers" to
access new hardware. System software becomes fatware (a.k.a. hugeware), as it
tries to cope separately with all the different users' different but similar
problems. It is also remarkable that while new standard libraries arise, they
do not lead to reduced code size, but to increased code size, to take into
account all the new capabilities added. It may be said that computing has been
making quantitative leaps, but no comparable qualitative leap; computing
grows in extension, but does not evolve toward intelligence; rather, it
becomes more largely stupid.
   A blatant example of this lack of evolution in system software quality
is the fact that the most popular system software in the world (MS-DOS) is a
twenty-year-old thing that does not allow the user to do either simple tasks
or complicated ones, thus being a no-operating system, and forces programmers
to rewrite low-level tasks every time they develop any non-trivial program,
while not providing trivial programs. This industry standard has always been
designed as the least possible subsystem of the Unix system, which
is itself a system made of features assembled in undue ways on top of only
two basic abstractions, the raw sequence of bytes ("files") and the ASCII
character string; Unix has had a huge number of new special files and
bizarre mechanisms added to allow access to new kinds of hardware or software
abstractions, but its principles are still those unusable, unfriendly files
and strings, which were originally conceived as minimal abstractions to fit
the tiny memory of then-existing computers, not as an interesting
concept for today's computers; now known as POSIX, it is the new industry
standard OS to come.
   The other tendency in widespread OSes is to found the system upon a
large number of human interface services, video and sound. This is known as
the "multi-media" revolution, which just means that your computer produces
high-quality graphics and sound. But that only allows direct use of the
newest hardware; software design, a.k.a. programming, is not made simpler
by it, but much harder: while a lot of new primitives are made available,
no new combinators are provided that could ease their manipulation. Thus
you have computers with beautiful interfaces that cannot do anything new;
to actually do interesting things, you must still write everything from
scratch, which leads to very expensive, low-quality, slowly-evolving software.
This is the cult of the external look, of the container, instead of the
internal being, the contents; this problem does not concern only the computer
industry, so the why of such a tendency is beyond the scope of this paper; but
one must be conscious of the problem anyway. The container may seem important
when one hasn't used a computer much; but if you really use computers
regularly, you'll see that a container improvement is useless unless a
corresponding improvement of the contents has been made.



2) Reuse versus Rewrite
-----------------------

   99.9% of programming time throughout the world is spent doing the same
basic things again and again. Of course, you can't escape asking students to
repeat what their elders did so they can understand it and internalize the
constraints of computing. The problem is that student programming represents
less than 50% of programming time, which means even professional people are
spending most of their time writing new versions of earlier works again and
again, nothing really "worth" the time they spend -- moreover, new work is
often done by students or their elder equivalent, researchers, which further
reduces the share of time professionals spend doing things really new.
   Now, you may think that such a situation creates jobs, and so is
desirable; so why bother?
   Well, rewriting is a serious problem for everyone. First of all, rewriting
is a loss of time, which makes programming delays much longer, and is thus
very costly. Even more costly is the fact that rewriting is an error-prone
operation, and at any time while rewriting, one may introduce errors very
difficult to trace and remove (if need be, one may recall the consequences of
computer failures in space ships, phone networks, planes). Reuse of existing
data across software rewrites, and communication of data between different
pieces of software, proves to be of exorbitant cost. The most costly aspect of
rewriting may be the fact that any work has a short lifespan, and will have to
be rewritten entirely from scratch whenever a new problem arises; thus
programming investment cost is high, and software maintenance is of high cost
and low quality. Consider also that rewriting is a thankless work that
disheartens programmers, which has an immeasurably negative effect
on programmer productivity and work quality. Last but not least, having to
rewrite from scratch places an arbitrary limit on software quality, namely
that no software is better than what one man can program during one life.

   Therefore, it will now be taken as proven that code rewriting is a
really bad thing, and that we thus want the opposite: software *reuse*. It
will be shown that such reuse is what the "Object-Orientation" slogan is all
about, and what it really means. But reuse itself introduces new problems
that have to be solved before reuse can actually be possible: how can we
reuse software without spreading errors from the reused software, without
introducing errors due to misunderstanding or misadaptation of old code,
and without software obsolescence? Let's see what the possible reuse
techniques are, and how they cope with these problems.
 


3) Extended libraries vs. better grammar
----------------------------------------

   The first and most common way to reuse code is to rely on standard
libraries. You wait for the function you need to be included in the standard
library, and use it as the manual describes when it is finally provided.
Unfortunately, standards take long to come, and take longer still to be
implemented the way they are documented. By that time, you will have needed
new non-standard features, and will have had to implement them or to use
non-standard libraries; when the standard finally includes your feature,
you'll either have a non-standard program, or will have to rewrite your
program to conform to the standard. So this kind of code reuse isn't the
solution. Moreover, it relies heavily on a central agency editing revised
versions of the standard library. This does not mean that no effort must be
made to build such a library, as a library greatly helps communication.
   It's like vocabulary and culture: you always need people to write
dictionaries, encyclopaedias, and reference textbooks; but these people
won't invent new knowledge and techniques, they only establish means
to communicate existing ones more easily. You still need other people to
create new things; you just can't wait for what you need to be included in
the next revision of such a reference book; and it won't be, if someone
doesn't settle things before they can be considered by the standardization
committees.
   Now how will a dictionary be used? How will the creative people, or even
the common people, work? They won't just use isolated words from the
dictionary; they will combine such words. And that is grammar -- the structure
of the language. So a library is great for reuse, but actually, a good grammar
is essential to use in general, and to reuse in particular. Thus what does
reuse mean for the language grammar? It means that you can define new words
from existing ones, thus creating new contexts, in which you can talk more
easily about your particular problems. The grammar should make it clear what
your sentences mean given the context, while there should be some way for
people with different context backgrounds, but talking about the same
problem, to communicate anyway and help each other. For the same problems
always arise in multiple places at the same time, and different contexts are
always built to solve them; and you don't build contexts just for the sake of
building contexts, but to solve problems; thus, before a standard context
that solves the problem is settled, you more than ever need to be able to
talk across different contexts, i.e. different extensions to standard
libraries.
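The "defining new words from existing ones" idea above can be sketched in any
language with decent grammar; here is a hypothetical Python illustration (the
names `compose`, `average` and `double_then_negate` are mine, invented for the
example, not part of any standard):

```python
# A sketch of "grammar over vocabulary": instead of waiting for a standard
# library to provide a word, we define it by combining existing words,
# thus creating a new context of our own.

def compose(f, g):
    """A new word built purely from existing ones: compose(f, g)(x) = f(g(x))."""
    return lambda x: f(g(x))

def average(xs):
    # A new "word" defined from the standard vocabulary `sum` and `len`.
    return sum(xs) / len(xs)

# Two existing words combined into a new one, without any central agency
# having to standardize it first.
double_then_negate = compose(lambda x: -x, lambda x: 2 * x)
```

The point is not the particular definitions, but that the grammar (function
definition and composition) lets each context grow its own vocabulary.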


   It should be stressed that computer languages have nothing to do with
finished, static "perfect" computer programs -- those could have been written
in any language, preferably a portable one (that is, any ANSI-supported
language, i.e. surely "C", even if I'd then prefer FORTH). If all interesting
things had already been said and understood, there would be no more need for a
language; but there are infinitely many interesting things, so a language
will always be needed, and no finite dictionary will ever be enough. Computer
languages have to do with programming, with modifying programs, creating new
programs, not just watching existing ones.
   Thus, the qualities of a (programming) language do not lie in what can be
done with the language (the language being hopefully Turing-equivalent, with
libraries to access all the hardware, i.e. able to express anything), nor
in the efficiency of a straightforward implementation (i.e. easy access for
beginners), as a good "optimizing" compiler can always be achieved later
and/or speed-critical routines can be included in libraries (i.e. if you
really need a language, then you won't be a beginner for long). Those
qualities lie in the ease of expressing *new* concepts, and of *modifying*
existing routines. With this in mind, a programming language is better than
another if it is easier for a human to write a new program or to modify an
existing program, or of course to reuse existing code (which is an extension
of modifying code); a language is better if sentences of equal accuracy are
shorter, or simply if better accuracy is reachable.



4) Repeating things
-------------------

   The second most simple way to reuse code is just to copy and paste it,
then modify it to fit one's new particular purpose. This is like copying
whole chapters of a book, and changing a few names to have them fit a new
context. Now, this method has many flaws and shortcomings, together with a
moral objection.
   First of all, copying is a tedious method; if you have to copy and modify
the same piece of code thousands of times, it can prove a long and difficult
work. Then, copying is an error-prone method: nothing will prevent you from
making mistakes while copying or modifying. And lastly, bugs and missing
features in the copied piece of code are spread across the system, and
upgrades are thus made very difficult. So this method is definitely bad for
anything but reuse of mostly identical, uncommon code in a completely
separate program.



5) Centralized code
-------------------

  The next way to reuse code is to have written code that will include
tests for all the different cases you may need in the future and branch to the
right one. This is some kind of library making, but much more clumsy, as a
single entry point will comprise all different behaviours needed. This method
proves hard to design well, as you have to take into account all possible
cases to arise, with predecided encoding, whereas a good encoding would have
to take into account actual use. It is slow as you must test many uncommon
cases; it is also slow and uneasy to use, as you must encode and decode the
arguments to fit a one entry point's parameters. It is very difficult to
modify whenever ; it is clumsy, as a single piece of code, slow, uneasy to use, and/or
produces inefficient code. Moreover, it is very hard to anticipate one'sfuture needs.


  This centralized approach, extended across a network, is known as
client-server architecture, a very primitive concept which some software
vendors claim to be proud of providing, whereas it is a very crude method
with moderate performance. Translating a software interface from library to
server is called multiplexing the stream of library/server accesses, while
the reverse translation is called demultiplexing it. All this is only useful
for networks with little parallel processing; its only advantage is its
simple implementation (i.e. low development cost), but use and maintenance
are expensive.

  A variant that combines both previous methods is to group all those similar
code fragments into a library, so that you can both cover all future cases
(well, you can still modify this library if you need to in the future, but
then we see that this method is no cure for a badly designed *language*),
and duplicate code where you need it to achieve some efficiency, while at the
same time confining code propagation to a known, limited area. This means
extending the system's vocabulary on your own (or between you and your
friends). As we have already seen, this is a good idea; but again, a base of
vocabulary, however big, cannot replace a good grammar; culture can never
replace intelligence; it saves you a lot of work that has already been done
by others (but can also be costly to acquire -- so don't always seek to
acquire useless culture), but won't speed up the remaining work (and as we
saw, there's always remaining work). If you want to dig a very long tunnel,
unless there's already one finished or almost done, you'd better look for
efficient digging machines than for the entrance of a formerly begun gallery.

  Then what are "intelligent" ways to produce reusable, easy-to-modify code?
Such a method should allow reusing code without duplicating it, and without
growing it in a both inefficient and incomplete way: an algorithm should be
written once and for all, for all the possible applications it may have.
That's __genericity__.
  First, we see that the same algorithm can apply to arbitrarily complex data
structures; but a piece of code can only handle a finitely complex data
structure; thus to write code with full genericity, we need to use code as
parameters, that is, __second order__. In a low-level language (like "C"),
this is done using function pointers.
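This second-order idea -- one algorithm written once, with the
structure-specific code passed in as a parameter -- can be sketched as
follows; it is a hypothetical illustration in Python (where functions play
the role the text assigns to C function pointers), with names of my own
invention:

```python
# A generic search written once for all possible data structures: the
# structure-specific knowledge (how to enumerate elements) is passed in
# as a function parameter -- the "second order" described in the text.

def find(enumerate_elements, predicate, structure):
    """Return the first element of `structure` satisfying `predicate`.
    `enumerate_elements` abstracts the structure's traversal."""
    for x in enumerate_elements(structure):
        if predicate(x):
            return x
    return None

# The same algorithm reused, unchanged, on two differently shaped structures.
def walk_list(xs):
    return iter(xs)

def walk_tree(node):
    # A tree node is (value, tuple_of_children); () is the empty tree.
    if node == ():
        return
    value, children = node
    yield value
    for child in children:
        yield from walk_tree(child)

flat = [3, 1, 4, 1, 5]
tree = (1, ((2, ()), (3, ())))
```

Note that `find` never had to be rewritten for the tree: only the traversal,
i.e. the code parameter, changed.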
  We soon see problems that arise from this method, and solutions to them.
The first one is that whenever we use some structure, we have to explicitly
pass functions along with it to tell the various generic algorithms
how to handle it. Worse even, a function that doesn't need some access method
of the structure may be asked to call other algorithms which turn
out to need this access method; and which exact method it needs may not
be known in advance (because what algorithm will eventually be called is not
known, for instance, in an interactive program). That's why explicitly passing
the methods as parameters is slow, ugly, inefficient; moreover, that's code
propagation (you propagate the list of methods associated with the structure --
if the list changes, all the using code changes). Thus, you mustn't pass
those methods *explicitly* as parameters. You must pass them implicitly;
when using a structure, the actual data and the methods to use it are embedded
together. Such a structure including the data and the methods to use it is
commonly called an *object*; the constant data part and the methods
constitute the *prototype* of the object; objects are commonly grouped into
*classes* made of objects with a common prototype and sharing common data.
*This* is the fundamental technique of /Object/-/Oriented/ programming. Well,
some call it Abstract Data Types (ADTs) and say it's only part of the
"OO" paradigm, while others don't see anything more in "OO". But that's only
a question of dictionary convention. In this paper, I'll call it ADT only,
while "OO" will also include more things. But be aware that words are not
settled, and that other authors may give the same names to different ideas
and vice versa.
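The object idea above -- data and methods embedded together, so generic code
never needs the method list passed alongside the data -- can be sketched like
this (a minimal hypothetical example; `Stack` and `drain` are invented names):

```python
# Data and the methods to use it are embedded together in an *object*,
# so code propagation of explicit method lists is avoided.

class Stack:
    """The class: its methods form the prototype shared by every instance."""
    def __init__(self):
        self.items = []          # the mutable data part of the object

    def push(self, x):
        self.items.append(x)

    def pop(self):
        return self.items.pop()

    def empty(self):
        return not self.items

def drain(container):
    # Generic code: it relies only on the methods embedded in the object,
    # not on being handed access functions for each concrete structure.
    out = []
    while not container.empty():
        out.append(container.pop())
    return out
```

If `Stack`'s representation changes, `drain` and every other user are
untouched: the methods travel implicitly with the data.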
   By the way, the same code-propagation argument explains why side effects
are an especially useful thing, as opposed to strictly functional programs
(see pure ML :); of course side effects complicate the semantics of
programming very much, to the point that ill use of side effects can make a
program impossible to understand and/or debug -- that's what not to do, and
such a possibility is the price to pay to prevent code propagation. Sharing
*mutable* data (data subject to side effects) between different embeddings
(different *users*), for instance, is something whose semantics still have to
be clearly settled (see below about object sharing).

  The second problem with second order is that if we are to provide functions
to other functions as parameters, we should have tools to produce such
functions. Methods can be created dynamically as well as "mere" data, which
is all the more frequent as a program needs user interaction. Thus, we need a
way to have functions not only as parameters, but also as results of other
functions. This is *higher order*, and a language which can achieve this has
a *reflexive* semantics. Lisp and ML are such languages; FORTH also, though
standard FORTH memory management isn't conceived for largely dynamic use of
such a feature in a persistent environment. In "C" and such low-level
languages, which don't allow a direct portable implementation of the
higher-order paradigm through the common function pointers (because low-level
code generation is not available as in FORTH), the only way to achieve
higher order is to build an interpreter for a higher-order language such as
LISP or ML (usually much more restricted languages are actually interpreted,
because programmers don't have time to elaborate their own user customization
language, whereas users don't want to learn a new complicated language for
each different application, and there is currently no standard user-friendly
small-scale higher-order language that everyone can adopt -- there are just
plenty of them, either very imperfect or too heavy to include in every
single application).
  With respect to typing, higher order means the target universe of the
language is reflexive -- it can talk about itself.
  With respect to object terminology, higher order consists in having
classes as objects, in turn groupable into *meta-classes*. And we then see
that it _does_ prevent code duplication, even in cases where the code concerns
just one user, as the user may want to consider concurrently two -- or more --
different instantiations of a same class (i.e. two *sub-users* may need to
have distinct but mostly similar object classes). Higher order somehow
allows there to be more than one computing environment: each function has its
own independent environment, which can in turn contain functions.
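Both faces of higher order described above -- functions as results, and
classes themselves manufactured like objects so that two sub-users get
distinct but similar classes -- can be sketched as follows (a hypothetical
Python illustration; all names are invented for the example):

```python
# Higher order: functions are not only parameters but also *results*,
# created at run time, each closing over its own environment.

def make_adder(n):
    def add(x):
        return x + n             # `add` carries its private environment (n)
    return add

# Classes as objects: a function can manufacture a class at run time,
# so two sub-users get distinct but mostly similar classes.
def make_counter_class(step):
    class Counter:
        def __init__(self):
            self.value = 0
        def tick(self):
            self.value += step
            return self.value
    return Counter

FastCounter = make_counter_class(10)   # one sub-user's class
SlowCounter = make_counter_class(1)    # another's: distinct, yet similar
```

No code was duplicated to obtain the two counter classes: the shared
algorithm was written once, and the differing part lives in each class's
own environment.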
  As for those who despise higher order and user customizability, I shall
reply that there is *NO* frontier between using and programming. Programming
*is* using the computer, while using a computer *is* programming it. The only
thing you get by having different languages and interfaces for "programmers"
and mere "users" is building plenty of inefficient languages, and stupefying
all computer users with a lot of useless, ill-conceived, similar but different
ones. You also make development cycles longer by building hardly crossable
borders between languages with different general functionalities, and prevent
much useful work from being done by users intelligent enough to write their
own modules, but who don't have time to write a complete application from
*scratch* (i.e. what is commonly provided to a computer user, including
so-called professional-quality commercial software), not even considering
making it suitable to exchange data with the other software they must use
("compatible").
  Some say that common users are too stupid to program; that's only despising
them; most of them don't have the *time* and *mind* to learn all the
subtleties of advanced programming; but they often do manually emulate
macros, and if shown once how to do it, are very eager to write their own
macros/aliases.
  Some fear that authorizing a "mere" user to use a powerful programming
language opens the door to piracy and system crashes. Well, if the language
library has such security holes, it's a library misconception; if the language
doesn't allow the design of a secure library, that's a language misconception.
Whatever was misdesigned, it should be redesigned, amended or
replaced (as "C" should be). If you don't want people to cross an ill-placed,
fissured wall, you'd better rebuild a new, better-placed wall than hire an
army of guards to shoot at people trying to get to the other side of the
misplaced wall, or unwillingly trespassing through a crack -- and with the
first solution, don't forget to also hire people to cope with the
shortcomings due to the wall's misplacement. And if what you want most is
nobody trespassing, well, just forbid people from ever nearing the wall --
don't let them use the computer.
  The truth is that any computer user, whether a programming guru or a
novice, is somehow trying to communicate with the machine. The easier the
communication, the quicker, better, and larger the work that gets done.

  To end with genericity, here is some material to feed your thoughts about
the need for system-builtin genericity: let's consider multiplexing.
  For instance, Unix (or worse, DOS) user/shell-level programs are ADTs,
but with only one exported operation, the "C" main() function, per executable
file. As such "OS"es are huge-grained, with ultra-heavy inter-executable-file
(even inter-same-executable-file-process) communication semantics, no one can
afford one executable per actual operation exported. Thus you'll group
operations into single executables whose main() function will multiplex those
functionalities.
  Also, communication channels are heavy to open, use, and maintain, so you
must explicitly pass all kinds of different data and code through single
channels by manually multiplexing them (and the same goes for having many
heavy files versus one manually multiplexed huge file).
  But the system cannot provide builtin multiplexing code for each single
program that will need it. It does provide code for multiplexing the
hardware: memory, disks, serial, parallel and network lines, screen, sound.
POSIX requirements grow with the things a compliant system ought to
multiplex; new multiplexing programs appear all the time. So the system
grows, while it will never be enough for user demands as long as *all*
possible multiplexing hasn't been programmed, and meanwhile applications will
spend most of their time manually multiplexing and demultiplexing objects not
yet supported by the system.
  Thus, any software development on common OSes is hugeware: huge in the
hardware resources needed (memory -- RAM or disk -- CPU power, time, etc.),
huge in resources spent, and, most importantly, huge in programming time.
  The problem is that current OSes provide no genericity of services. Thus
they can never do the job for you. That's why we really NEED *generic* system
multiplexing, and more generally genericity as part of the system. If one
generic multiplexer object were built, with two generic specializations
for serial channels or flat arrays, and some options for real-time behaviour
and recovery strategy on failure, that would be enough for all the
multiplexing work currently done everywhere.
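To make the "generic multiplexer" idea concrete, here is a hypothetical
sketch of what such a reusable object might look like, reduced to its bare
logic (tagging and untagging items of named logical channels over one
stream); the real thing would of course add the real-time and recovery
options mentioned above:

```python
# A generic multiplexer written once: it merges any number of logical
# channels onto one tagged stream, and splits the stream back -- instead
# of every application hand-coding its own multiplexing.

def multiplex(channels):
    """Merge several named streams into one tagged stream.
    `channels` maps a channel name to an iterable of items."""
    for name, items in channels.items():
        for item in items:
            yield (name, item)

def demultiplex(stream):
    """Split a tagged stream back into its named channels."""
    channels = {}
    for name, item in stream:
        channels.setdefault(name, []).append(item)
    return channels
```

Being generic, the same pair serves serial lines, files, or in-memory
queues alike; only the iterables plugged into it change.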

  So much for Full Genericity: Abstract Data Types and Higher Order.
Now, while this allows code reuse without code replication -- what we wanted --
it also raises new communication problems: if you reuse objects, especially
objects designed far away in space and/or time (i.e. designed by other
people, or by another, former self), you must ensure that the reuse is
consistent, that an object can rely upon a used object's behaviour. This is
most dramatic if the used object (e.g. part of a library) comes to change,
and a bug (of which you might have been aware -- a quirk -- and for which you
might already have modified your program accordingly) is removed or added.
How do we ensure the consistency of object combinations?
  Current common "OO" languages do not perform many consistency checks. At
most, they include some more or less powerful kind of type checking (the most
powerful ones being those of well-typed functional languages like CAML or
SML), but you should know that even powerful, such type checking is not
yet secure. For example, you may well expect more precise behaviour from
a comparison function on an ordered class 'a than just having type
'a->'a->{LT,EQ,GT}, i.e. telling that when you compare two elements the
result can be "less than", "equal", or "greater than": you may want the
comparison function to be compatible with the class actually being ordered,
that is, x<y & y<z => x<z and such. Of course, a typechecking scheme, which
is more than useful in any case, is a deterministic decision system, and as
such cannot completely check arbitrary logical properties as expressed above
(see your nearest lectures in Logic or Computation Theory). That's why to add
such enhanced security, you must add non-deterministic behaviour to your
consistency checker and/or ask for human help. That's the price for 100%
secure object combining (but not 100% secure programming, as human error is
still possible in misexpressing the requirements for using an object, and
the non-deterministic behaviour can require human-forced admission of
consistency checks unproved by the computer).
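The gap between the type and the intended property can be illustrated
concretely. The sketch below (hypothetical names, Python standing in for the
ML types quoted above) shows a well-typed comparison that is nevertheless not
an order, and a non-exhaustive transitivity test -- the best a deterministic
tool can do without an actual proof:

```python
# A typechecker can guarantee the type 'a -> 'a -> {LT,EQ,GT}, but not the
# property x<y & y<z => x<z. Short of a proof, a checker can only *test*
# the property on sample data, as sketched here.

LT, EQ, GT = -1, 0, 1

def check_transitive(compare, samples):
    """Non-exhaustive check of transitivity over the given samples."""
    for x in samples:
        for y in samples:
            for z in samples:
                if compare(x, y) == LT and compare(y, z) == LT:
                    if compare(x, z) != LT:
                        return False    # found a counterexample
    return True

def int_compare(a, b):
    # A genuine order on integers.
    return LT if a < b else (EQ if a == b else GT)

def bogus_compare(a, b):
    # Perfectly well-typed, yet not an order: the type system cannot tell.
    return LT if (a - b) % 3 == 1 else GT
```

Such testing can reject `bogus_compare`, but can never *prove* `int_compare`
correct; that is exactly why the text calls for non-deterministic checking
and/or human help.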
  This kind of consistency security through logical formal properties of code
is called a formal specification method. The future of secure programming
lies there (try enquiring in the industry about the cost of testing and/or
debugging software that can endanger the company or even human lives if
ill-written, and the insurance funds spent to cover eventual failures --
you'll understand). Life-critical industries already use such modular formal
specification techniques.
  In any case, we see that even when such methods are not applied
automatically by the computer system, the programmer has to apply them
manually, by including the specification in comments and/or understanding
the code, thus doing the computer's work.

  Now that you've settled the skeleton of your language's requirements, you
can think about peripheral deduced problems.





------------------------------------------------------------------------------
[draft]
- not building an artificial border between programmers and users --> not
only the system programming *language* must be OO, but the whole *system*.
- easy user extensibility --> language-level reflexivity.
- sharing mutable data: how ? --> specifications & explicitly mutable/immutable
(or more or less mutation-prone ?) & time & locking -- transactions.
- objects that *must* be shared: all the hardware resources -- disks & al.
- sharing across time --> persistence
- reaching precision/mem/speed/resource limit: what to do ? --> exceptions
- recovering from exceptional situations: how ? --> continuations (easy if
 higher-order on)
- tools to search into a library --> must understand all kinds of morphisms in
a logically specified structure.
- sharing across a network -->
 - almost the same: tools for merging code --> that's tricky. Very important
for networks or even data distributed on removable memory (aka floppies) --
each object should have its own merging/recovery method.
- more generally tools for having side effects on the code.




* Structures:
-------------
we consider Logical Structures: each structure contains some types, and
symbols for typed constants, relations, and functions between those types.
We then know some algebraic properties verified by those objects,
i.e. a structure of typed objects, with a set of constant, function, and
relation symbols, etc.

  A structure A is interpreted in another structure B if you can map the
symbols of A to combinations of symbols of B (with all the properties
preserved). The simplest way to be interpreted is to be included.
  A structure A is a specialization of a structure B if it has the same
symbols, but you know more properties about the represented objects.

* Mutable objects:
------------------
We consider the structure of all the possible states of the object. The
actual state is a specialization of the structure. The changing states
across time constitute a stream of states.

* Sharing Data
--------------
  The problem is: what to do if someone modifies an object that others see?
Well, it depends on the object. An object to be shared must have been
programmed with special care.
  The simplest case is when the object is atomic, and can be read or modified
atomically. At any one time, the state is well defined, and this state is
what the other sharers see.
  When the object is a rigid structure of atomic objects, well, we assume that
you can lock the parts of the object that must be changed together -- in the
meantime, the object is inaccessible or only readable -- and when the
modification is done, everyone can access the object as before. That's
transactions.
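A minimal sketch of that transaction idea, assuming a simple lock per object
(hypothetical names; a real system would add logging and rollback on
failure):

```python
# Parts that must change together are modified under locks, so other
# sharers never observe a half-done modification.

import threading

class SharedAccount:
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    """One transaction: both balances change together or not at all."""
    # Acquire locks in a fixed order to avoid deadlock between
    # two concurrent transfers going in opposite directions.
    first, second = sorted([src, dst], key=id)
    with first.lock, second.lock:
        if src.balance < amount:
            return False          # transaction refused, nothing changed
        src.balance -= amount
        dst.balance += amount
        return True
```

In the meantime the two accounts are, as the text says, inaccessible to
writers; once the `with` block exits, everyone sees the consistent new state.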
  Now, what to do when the object is a very long file (say text), of which
each user sees a small part (say a full screen of text), and someone
somewhere adds or deletes some records (say a sentence)? Will each user's
screen scroll according to the number of records deleted? Or will it
stay at the same spot? The latter behaviour seems more natural. Thus, a file
has this behaviour that whenever a modification is made, all pointers into
the file must change. But consider a file shared by _all_ the users across a
network. Now, a little modification by someone somewhere will affect
everyone! That's why both the semantics and implementation of shared objects
should be thought about at length before they are settled.

Problem: recovery
-----------------
What to do when assumptions are broken by higher-priority objects?
E.g. when the user interrupts a real-time process, when he forces a
modification in an otherwise locked file, when the process is out of memory,
etc.
Imagine a real-time process is interrupted: will it continue where it
stopped? Or will it skip what was done during the interruption?
Imagine the system runs out of memory: whose memory are you to reclaim?
The biggest process's? The smallest's? The oldest's? That of the first to
ask for more? If objects spawn, thus filling memory (or CPU), how do you
detect "the one" responsible and destroy it?
If an object locks a common resource, and then is itself blocked by a failure
or some other unwanted latency, should its transaction be cancelled, so
others can access the resource, or should the whole system wait for that
single transaction to end?

  As for implementation methods, you should always be aware that defining
all those abstractions as the abstractions they are, rather than as
hand-coded emulations of them, allows better optimizations by the compiler,
a quicker writing phase for the programmer, neater semantics for the
reader/reuser, no implementation code propagation, etc.
  Partial evaluation should also allow specialization of code that doesn't
use all the language's powerful semantics, so that standalone code can be
produced without including the full range of heavy reflexive tools.


------------------------------------------------------------------------------
Summary:
========

* Axioms:
--------
= "No man should do what the computer can do quicker for him (including the
time spent to have the computer understand what to do)" -- that's why we need
to be able to give orders to the computer, i.e. to program.
= "Do not redo what others already did when you've got more important work" --
that's why we need code reuse.
= "No uncontrolled code propagation" -- that's why we need genericity.
= "Security is a must when large systems are being designed" -- that's why we
need strong typechecking and more.
= "No artificial border between programming and using" -- that's why the
entire system should be OO with a unified language system, not just a hidden
system layer.
= "No computer user is an island, entire of itself" -- you'll always have to
connect (through cables, floppies, CD-ROMs or whatever) to external
networks, so the system must be open to external modifications, updates and
such.


  That is, without ADTs, and ways of combining ADTs, you spend most of your
time manually multiplexing. Without semantic reflexivity (higher order), you
spend most of your time manually interpreting runtime-generated code or
manually compiling higher-order code. Without logical specification, you
spend most of your time manually verifying. Without language reflexivity, you
spend most of your time building user interfaces. Without small grain, you
spend most of your time emulating simple objects with complex ones. Without
persistence, you spend most of your time writing disk I/O (or worse, net I/O)
routines. Without transactions, you spend most of your time locking files.
Without code generation from constraints, you spend most of your time writing
redundant functions that could have been deduced from the constraints.
  To conclude, there are essentially two things we fight: the lack of
features and power in software, and the artificial barriers that the
misdesign of former software builds between computer objects and other
computer objects, computer objects and human beings, and human beings and
other human beings.

