There was lots of interesting stuff in your reply to the "Guile is
Good for You FAQ".
On a few points, i think there was some misunderstanding of what i
tried to say, and this is just to clear those up.
For one thing you write:
Your attitude here seems to be that you won't seriously consider
anything that departs at all from your current implementation
drive. E.g., for RScheme to be considered, we'd have to clone
everything you've done, which we don't even have a manual for yet,
much less a clear spec.
That isn't quite my attitude!
I'm not trying to make it harder to use RScheme (or any compiler) for
Guile -- I'm trying to make it easier.
It is much simpler to just agree on a much smaller spec than the whole
library: perhaps a VM interface and some calling conventions. And you
pointed out the possibility of adapting the upcoming RScheme GC --
that sounds good too!
And you ask:
Do you have a plan for stabilizing the language so that others can [...]
I think it is just too early. Standardization and stabilization of a
Scheme language is a well-explored problem -- R4RS and IEEE exist.
This is more a phase of exploring the design space and finding out
what works well in the context of an extension language.
We recently improved our module system, and
I'm interested to know whether we made it more
Guile-compatible or less.
There is (now) at least preliminary documentation for it. Guile
modules are very flexible, and i hope that any other module system can
be implemented and integrated using the facilities in Guile. Choice
of module-system abstraction shouldn't be a sticking point.
>deliberately has all sorts of yummy dynamic behaviors that would be
>difficult and pointless to try to get right in a compiler (examples:
>lazily computed top-levels, Guile's low-level macro system, and [...]
I don't see why these things conflict with having a compiler-oriented [...]
I think i just put it poorly.
Some compilers, call them eval-oriented, are optimized for
running "eval" on-the-fly.
Some compilers, call them batch-oriented, are optimized for
doing static analysis of programs and generating snappy object code.
To an extent, you can fake one with the other. You can add
optimizations or pre-processors to an eval-oriented compiler that
mimic some optimizations that might be seen in a batch-oriented
compiler. You can make sure your batch-oriented compiler has a very
fast mode in which few optimizations are performed, but code is
generated quickly.
I'm not sure, though, how well an eval-oriented compiler can fake
being a batch-oriented one or vice versa. They each seem to benefit
from different code representations (bytecodes or machine instructions
vs. s-expressions), environment representations, and so forth. S-exp
based evaluation presents interesting, if ad hoc, opportunities for
writing self-modifying code without getting too tangled up in
low-level VM details.
Given these differences, why bother trying to fake eval-orientation
with a batch-oriented compiler (or vice versa)? The eval-oriented
compiler and VM in Guile is small by most standards (today's .o file
is 41K of sharable program text on a 486). Why not regard it as a
handy part of the run-time system, one that allows the batch compiler
to punt on being the implementation of run-time eval?
>Guile also deliberately has an execution model that is good for some
>kinds of compiler (like hobbit and, i suspect, Rscheme's) and bad for
>others (from what i read, the calling conventions would just be an
>obstacle for, say, twobit).
I'm not sure what the issues are here. Could you be more specific,
e.g., what twobit likes that Guile doesn't do?
Guile likes C calling conventions and i recall that twobit prefers its
own. (I'm not a twobit expert, my recollection could be off.)
>Also, among the cons pair handles, the freelist is always sorted so
>allocations of cons pairs tend to cluster. (Not that this has been [...]
From some measurements of allocator stuff we've done lately---and
also from some old measurements of reference counted and mark-sweep
collectors---I strongly suspect that this isn't worth the cost.
(I'd be interested to hear if it *is* worth the cost, though.)
I think there is a misunderstanding.
The freelist is always *in a sorted state*. There is never any need
to *sort* the freelist, so there is no cost.
It is just a free side-effect of mark-sweep collection that
the freelist is sorted.
From our point of view, this is considerably less attractive. Compiler
development is serious business. If our stuff is put in a subsidiary
position (a subset compiler), it's not much of a "market" for us.
I think subsets are very good things.
For example, we could specify a subset of Guile that is basically R4RS
(but with ()==#f). Using the Guile package system, we could even
create top-level environments that are restricted to that subset. In
such dialects people could (and already do) write such things as
generic, performance-critical data structure code. It would be an
added bonus if programs written in such dialects could be specially
compiled into high-quality code.
The resulting system, with its absurdly extensible interpretive
environments alongside austere, formally standardized
high-performance environments, would give programmers a complete set
of tools and a realistic set of trade-off choices.
Anyway, given a subset compiler, i'd bet it is almost always
straightforward to write a Scheme->Scheme translator that translates
all of Guile into that subset. Then the significance of "subset" in
"subset compiler" becomes "what subset of the language is compiled to
really good code"?
>It is true that Guile has some features that no compiler could hope to
>recover from if it tried to deal with them. Lazy top levels and
>low-level macros are two examples.
Could you explain the problems here?
The bindings within lazy top levels aren't knowable at compile-time.
So, you don't know what is syntax and what isn't.
Low-level macros are called on-the-fly and return s-expressions.
Features like those favor an eval-oriented compiler.