More dynamic - Luke's Weblog
Luke Gorrie
More dynamic [Oct. 22nd, 2008|10:07 pm]
Luke Gorrie

Just a couple of recent thoughts...

Tools shape thought. Many people at Smalltalk Superpowers pointed out that adding static tools to dynamic languages discourages people from writing very dynamic programs. We think twice before using EVAL, APPLY, perform:, messageNotUnderstood:, because our tools are confused by them -- debuggers, cross-referencers, refactorers, type checkers, etc. We're more inhibited in languages with sophisticated tools (Scheme, Common Lisp, Smalltalk) than in more basic ones (Emacs Lisp, Javascript, Ruby). Example: I won't use SCREAMER because I know how ugly it would be to use SLIME to debug code that has been CPS-converted by macros. These tools are a mixed blessing.
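The inhibition is easy to demonstrate. Here is a minimal sketch in Python (all names are hypothetical, invented for illustration): the call target is computed at runtime, so a cross-referencer or "find callers" tool has nothing to find.

```python
# Hypothetical example of dynamic dispatch defeating static tools: the
# call to handle_ping below never appears literally in the source, so
# grepping or "find callers" cannot see it.

class Router:
    def handle_ping(self, payload):
        return "pong:" + payload

    def dispatch(self, kind, payload):
        # The method name is computed at runtime -- the moral equivalent
        # of EVAL/APPLY/perform:. A search for "handle_ping(" finds nothing.
        method = getattr(self, "handle_" + kind)
        return method(payload)

router = Router()
print(router.dispatch("ping", "hello"))  # -> pong:hello
```

A static tool would have to understand the string concatenation inside dispatch to recover the call graph; most give up and show handle_ping as dead code.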

Dynamic is cool. Smalltalk and Forth execute by direct operations on simple first-class objects (program counter, stack, etc.). You can dynamically manipulate those objects in simple ways to do useful things: call/cc (copy a sequence of activation records), tail-call optimization (drop one record), exceptions (drop records until you find a handler), for example. This really feels easy compared with program-transformation approaches like CPS-conversion. Compilers are hard, but stacks and program counters are easy.
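As a sketch of this point (hypothetical code, not taken from any of the systems named above): when the frame stack is just a first-class list, tail-call optimization and exception unwinding become one-line list operations rather than whole-program transformations.

```python
# Toy sketch: a first-class frame stack where tail calls and exceptions
# are simple list operations. All names are hypothetical.

class Frame:
    def __init__(self, name, handler=None):
        self.name = name
        self.handler = handler  # optional exception handler

stack = []

def call(name, handler=None):
    stack.append(Frame(name, handler))

def tail_call(name):
    stack.pop()          # tail-call optimization: drop the caller's frame
    call(name)

def raise_to_handler(error):
    while stack and stack[-1].handler is None:
        stack.pop()      # exceptions: drop frames until we find a handler
    if not stack:
        raise RuntimeError("unhandled: " + error)
    return stack[-1].handler(error)

call("main", handler=lambda e: "caught " + e)
call("loop")
tail_call("loop")        # stack depth stays constant across tail calls
assert [f.name for f in stack] == ["main", "loop"]
print(raise_to_handler("oops"))   # -> caught oops
```

Each feature is a couple of lines of list manipulation; the CPS-conversion route would instead rewrite every function in the program.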

I wonder what the most late-bound programming environment is today? Please tell me if you know.


Comments:
From: (Anonymous)
2008-10-23 07:18 am (UTC)

io?

Haven't used it, but I think Io (http://www.iolanguage.com/) is supposed to be pretty high up there.
From: lukego
2008-10-23 10:08 pm (UTC)

Re: io?

Thanks for this pointer! Io looks fascinating. I'm waiting for 'make port' to finish now :-)

Here's the best jumping off point that I found: http://www.iolanguage.com/scm/git/checkout/Io/docs/IoGuide.html#Syntax-Expressions
From: vatine
2008-10-23 08:36 am (UTC)
Hm, once I am approximately sure how a macro layer works, the way that it makes for ugly debugging is actually not TOO much of an issue for me.

But, then, with rapid redefinitions and kept state from an interactive dev environment, I frequently find that carefully inserted printing of intermediates is quite powerful as a debugging technique.
From: graydon
2008-10-23 02:04 pm (UTC)
First, you're absolutely right about the "superpower" expressiveness of the more dynamic and simple low-level designs. But second, I do not feel that the world needs more expressive programming languages. We've been at (or near) the limit of expressiveness since, as you say, Forth, Lisp, Smalltalk, etc. They are good for two contexts: extremely tasteful and skilled masters who do not wish to be constrained and have the luxury of working on small, intrinsically manageable systems; and newcomers who need to be tempted with superpowers in order to learn to code and stretch their minds.

But for the rest of us -- the bulk of wake-up-each-day-to-write-not-completely-surprising-code programmers -- I think the need is for programs that are constrained, manageable, limited, predictable, analyzable and robust. So I favour new language designs that think hard about how to achieve this with static structures precisely because they inhibit expressiveness. Or rather: they focus on the expressiveness of checking and prohibition mechanisms, rather than on active "doing stuff" code.

Saying what you want a program to do is important, but saying what you don't want a program to do is often more important. Especially given the economics: writing code is about 500x cheaper per unit than debugging it once it's running. So it's quite sensible to trade "doing stuff" expressiveness for "predictability and control" expressiveness.

Java took steps in this direction -- its tooling is really exceptional these days -- but didn't go nearly far enough. It's still very hard to control the time and space behavior of a Java program, and its type system is simultaneously inconvenient and overwrought in a couple of dimensions and simply underpowered in a couple of others.
From: lukego
2008-10-23 04:15 pm (UTC)
How about strong dynamic invariants? Like the way we can make random pointer access more manageable by restricting it to specific areas of memory (e.g. with an MMU), we can control data consistency with transaction semantics, and so on. These are the things I'm currently enthusiastic about.
From: graydon
2008-10-23 04:35 pm (UTC)
Yes. In practice I believe some "tasteful blend" of dynamic and static invariants helps more than a religious dedication to either. There is definitely a tension between making elaborate encodings of safety in a static type system -- perhaps one so hard to use that nobody winds up using it -- and just leaving the more elaborate bits to simple, obvious dynamic checks. It's all the worse when you are religious about static-ness without providing a dynamic checking system where the static system inevitably runs out of steam. Then you wind up with something like C, where you can punch out of the static "type system" and then return to it under false pretenses via a cast. What's the point? It'd be way better to make a void* a fat pointer that carries a type information record around with it, and checks it when you try to cast it back into a monomorphic C type. But then, it'd also be nice if C arrays knew how long they were. Sigh.
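The fat-pointer idea can be mimicked in a few lines of Python (a hypothetical sketch, not real C machinery): the box carries a type record alongside the value, and the cast back to a concrete type is checked rather than taken on trust.

```python
# Hypothetical sketch of a "fat pointer": a void*-like box that carries a
# type record and checks casts, instead of letting you return to the type
# system under false pretenses.

class FatPtr:
    def __init__(self, value):
        self.value = value
        self.type_record = type(value)   # carried alongside the pointer

    def cast(self, expected_type):
        # The checked cast: fails loudly instead of silently reinterpreting.
        if self.type_record is not expected_type:
            raise TypeError(f"cast to {expected_type.__name__}, "
                            f"but value is {self.type_record.__name__}")
        return self.value

p = FatPtr([1, 2, 3])
print(p.cast(list))        # ok -> [1, 2, 3]
try:
    p.cast(dict)           # caught dynamically, not undefined behavior
except TypeError as e:
    print("rejected:", e)
```

The cost is one extra word per pointer and a check per cast; the benefit is that the static world is re-entered only with evidence.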

But also keep in mind that "misbehavior" is only half a matter of what a program does when run: it's also half a matter of what you expected it to do. So if the text of the program is so opaque with dynamic magic that you cannot tell what you can expect it to do, its misbehavior is practically guaranteed. Even if it satisfies all of its dynamic checks at runtime. Static structure often helps the reader figure out what to expect.

Put another way: language mediates two directions of communication: programmer-to-computer and programmer-to-other-programmer. Dynamic invariants help with the former, but they only help with the latter insofar as the person reading the code can deduce which invariants will be dynamically enforced in which contexts. An example failing here is Lisp s-expressions, which decidedly don't always mean "beta-reduce in applicative order". Their meaning is so sensitive to the syntactic context (and that itself is extensible via rather powerful macros) that you often have to expand it out and/or break in a debugger to figure out how the components of a given sexp are eventually going to be interpreted. So the static/textual apparentness of the meaning and guarantees made by a piece of code is, in practice, quite important too.

Hopefully that's not incoherent!
From: darius
2008-10-23 06:23 pm (UTC)
I once toyed with a Lisp dialect distinguishing calls from syntax and macros -- like {define {f n} {if (= n 0) 1 (* n (f (- n 1)))}} or maybe (define (f n) (if {= n 0} 1 {* n {f {- n 1}}})) -- but it just feels ugly, sigh.

There are some nice examples of dynamic invariants communicated to other programmers as well as the computer: Findler & Krishnamurthy's contracts and E's guards. (I'm not arguing with your principles, which I agree with.)
From: graydon
2008-10-23 06:49 pm (UTC)
There are a bunch of such Lisps, yeah. It's common to throw [] into the mix too. Might as well. But it does seem to kill the smooth lines of Lisp's aesthetics!

I find the contract work quite impressive in the sense that it goes to great lengths to make sure the correct person is blamed, even in tricky higher-order code. I'm less impressed with the frequency of re-running the checks. I'd like dynamic checks to act more as a "boundary" between dynamic state and static proven properties, if they're to be efficient. But then, there's not much in the way of static properties to fall back into, in a scheme program.
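For readers unfamiliar with the blame idea, here is a toy Python sketch (a hypothetical `contract` decorator, far simpler than real Findler-style contract systems): a failed precondition blames the caller, a failed postcondition blames the function itself. It also shows the re-running cost mentioned above: every recursive call re-checks both conditions.

```python
# Hypothetical toy contract decorator with blame assignment: bad
# arguments blame the caller, bad results blame the function.

def contract(pre, post):
    def wrap(fn):
        def checked(x):
            if not pre(x):
                raise AssertionError("blame the caller of " + fn.__name__)
            result = fn(x)
            if not post(result):
                raise AssertionError("blame " + fn.__name__ + " itself")
            return result
        return checked
    return wrap

@contract(pre=lambda n: n >= 0, post=lambda r: r >= 1)
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))   # -> 120, but the contract ran 6 times on the way
```

Note that the recursion goes through the checked wrapper, so the contract is re-verified at every level -- exactly the redundant checking a static boundary would eliminate.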

(Curiously: the tracing JIT I'm presently working on synthesizes types and any number of additional "interesting" dynamic invariants of a trace at runtime, by observation, and then switches between specialized copies of a trace based on which assumed conditions currently hold. This has been somewhat eye-opening, though the feature is not exposed to the source language, so it's mostly limited to an efficiency trick, not a correctness one.)
From: lukego
2008-10-23 07:03 pm (UTC)
Incidentally, I just heard a talk about that JIT (assuming it's TraceMonkey) at OOPSLA. Very interesting!
From: graydon
2008-10-23 07:06 pm (UTC)
Really? I had no idea anyone was presenting on it. Who?
From: lukego
2008-10-23 07:12 pm (UTC)
Andreas Gal. Nice bloke, I had lunch with him. T'was this workshop: http://www.cs.iastate.edu/~design/vmil/
From: graydon
2008-10-23 07:18 pm (UTC)
Oh, yeah, I'm sitting in an IRC channel working with Andreas presently. I had no idea he was presenting at OOPSLA. He's so productive I suppose I wouldn't notice. Probably committing patches during the talk.
From: graydon
2008-10-23 04:42 pm (UTC)
I also meant to add: an illuminating counterpart to this issue exists outside language design. Hardware designers are also painfully aware of an economic gap between the cost of creation and the cost of debugging and fixing faults in the field. They too face two styles of approaching the "making correct hardware" problem: elaborate proof assistants, and model checkers. The model checkers just use brute force and ignorance, but they are easier to use, easier to explain, more flexible, and in practice I think wind up being used much more (at great computational expense).
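The brute-force style is simple enough to fit in a toy sketch (hypothetical code, nothing like an industrial model checker): enumerate every reachable state breadth-first and test the invariant in each, returning a concrete counterexample when it fails.

```python
# Toy explicit-state model checker: brute force and ignorance. Enumerate
# all reachable states and check an invariant, returning a counterexample.

from collections import deque

def check(initial, successors, invariant):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state                  # a concrete counterexample
        for s in successors(state):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return None                           # invariant holds everywhere

# Two processes that may both enter a critical section: the checker finds
# the bad state by exhaustive search, no cleverness required.
def successors(state):
    a, b = state
    return [(1 - a, b), (a, 1 - b)]       # either process flips in/out

print(check((0, 0), successors, lambda s: s != (1, 1)))  # -> (1, 1)
```

A proof assistant would require inventing an inductive invariant; the checker just walks the state space and hands back a witness, which is easier to explain and to act on.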

(Also in debugging tools: there are complex static analyzers like Coverity that try to figure out program wrongness symbolically, and then there are brute-force tools like Valgrind that just track every bit in the system while it's running and show you the party to blame when there's a fault. The latter, perhaps sadly, are often easier to get useful results out of than the former.)
From: (Anonymous)
2008-10-23 05:46 pm (UTC)

Late-bound programming environment

Have you tried Objective-C? The Cocoa environment for developing Mac programs is probably the most late-bound production environment that I know of.

Here's the info from Apple's site:
http://developer.apple.com/documentation/Cocoa/Conceptual/OOP_ObjC/Articles/chapter_5_section_6.html

Pradeep
From: gwozniak
2008-10-24 10:57 am (UTC)
I've always liked Self for dynamism and late-binding.
From: ext_129989
2008-10-25 05:48 pm (UTC)

On a more dynamic Lisp

Luke, this is exactly what I tried to convey over dinner at OOPSLA. The main problem I have with Common Lisp is that the spec is defined semantically, i.e. with pages of text, instead of mechanically in Lisp itself. This has left it neither very reflective nor extensible. The fact that you can't get a handle on the stack in an implementation-independent way is an example of the problem. This is what makes Scheme attractive: it is defined in itself, and therefore redefinable in itself.

Eric
From: lukego
2008-10-27 03:11 pm (UTC)

Re: On a more dynamic Lisp

Hi Eric!

I think you give Scheme too much credit :-) To me, R5RS seems less reflective and no more extensible than standard Common Lisp.