> Given these deep divisions over the essential nature of the Scheme language, does it even make sense that we still keep making a Scheme report?
> ‘No’ is an entirely possible answer to this question. Already in the R6RS and R7RS small days, people were arguing that Scheme standardization should stop.
> If we went this way then, just like Racket in its default mode no longer claims to be a Scheme report implementation, Schemes would slowly diverge into different languages. Guile Scheme would one day simply be Guile; Chicken Scheme would be Chicken, and so on. Like the many descendants of Algol 60 and 68, and the many dialects of those descendants, each of these languages would have a strongly recognizable common ancestor, but each would still be distinct and, ultimately, likely incompatible.
This would doom all of those variants to even deeper irrelevance than they already suffer.
To the degree that people want Scheme to be a useful language for writing programs that solve real-world problems, it needs an ecosystem. And in order to compete with other languages, that ecosystem needs to be commensurate with the scale that those other languages have. Otherwise, it doesn't matter how elegant the syntax is or how powerful the macro system is. If a user needs to talk to a database and there isn't a good database library, they aren't going to pick the language.
The Scheme ecosystem is already tiny even when you lump all the dialects and their packages together. Fragment that, and you're probably below viability for all of them.
Now, it is fine if the goal of Scheme is not writing programs to solve real-world problems. It may be just a teaching language. But the evidence seems to be that it's hard to motivate programming students to learn a language that they ultimately won't end up using.
If you have immutable data and want code reuse, you need laziness; otherwise you give up a lot of performance. In Haskell, a careful choice of implementation for `sort` means that `take 10 (sort someList)` will only do enough work to return the first ten elements.
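A small sketch of that behaviour (`firstTen` is my own name for it; the effect relies on GHC's `Data.List.sort` being a lazy merge sort):

```haskell
import Data.List (sort)

-- Demanding only ten elements forces only enough of the merge to
-- produce them: roughly O(n + k log n) work for the first k
-- elements, rather than the full O(n log n) sort.
firstTen :: Ord a => [a] -> [a]
firstTen xs = take 10 (sort xs)

main :: IO ()
main = print (firstTen [100000, 99999 .. 1])
```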
Having laziness by default means that functions compose properly by default; you don't have to worry about libraries providing an interface to your chosen incremental streaming library or whatever. I've seen friends working in strict dialects of Haskell forced to write out each combination of list functions by hand, because otherwise they'd have to materialise large intermediate data structures that regular lazy Haskell simply wouldn't build.
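As an illustration of composing "properly by default" (the pipeline below is my own example, not from any particular library): ordinary list functions chain into a stream, with no dedicated streaming interface in sight.

```haskell
import Data.List (isPrefixOf)

-- Each stage consumes its input one element at a time, so the
-- pipeline as a whole streams: no intermediate list is ever fully
-- materialised, even on very large input.
firstTenErrors :: String -> [String]
firstTenErrors = take 10 . filter ("ERROR" `isPrefixOf`) . lines

main :: IO ()
main = interact (unlines . firstTenErrors)
```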
Ed Kmett has a couple of great posts about the value he's realised from laziness:
What works for Kmett most often does not work for mere mortals.
As for your first point, I think it's self-defeating: You claim
"you don't have to worry about libraries providing an interface to your chosen incremental streaming library", but this requires "a careful choice of implementation" in those libraries with your chosen incremental streaming semantics in mind, which is the same thing but less explicit! And as long as mere mortals can't figure out the magic implementation of `sort` which makes incremental streaming work without explicit bindings, then what's the point?
Haskell is a great language for consuming libraries written by Ed Kmett, as your link demonstrates. Otherwise, it's difficult to work with.
I had Ed's posts on hand, but I don't think that's true: it works at mortal levels too. If you write `map f (map g someList)`, laziness means you'll materialise at most one cons cell of the intermediate list at a time (although GHC will probably fuse it away outright). A strict language would materialise the entire intermediate list, and mortal developers would have to know to write `map (f . g) someList` to avoid that.
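A minimal sketch of that point, with `(* 2)` and `(+ 1)` standing in for `g` and `f`:

```haskell
-- Lazy evaluation: each cons cell of the intermediate list is
-- produced and consumed immediately, so at most one cell is live
-- at any moment.
twoPasses :: [Int] -> [Int]
twoPasses xs = map (+ 1) (map (* 2) xs)

-- The hand-fused version a strict language would force you to
-- write to get the same space behaviour.
onePass :: [Int] -> [Int]
onePass xs = map ((+ 1) . (* 2)) xs
```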
This all assumes that you're willing to buy into a language that does immutable data structures by default, and are willing to rely on the work of people like Okasaki who had to work out how to do performant purely functional data structures. If you're willing to admit more mutability (I'm not), then you sit at different points in the design space.
You don’t have to worry about the order of evaluation. It doesn’t seem like a big deal, until you have a big messy problem with data-dependent interdependency. Also, every list is a generator, and every indexable container is an instance of “dynamic programming”. So you just declare how values are computed from other values, and let the runtime take care of ordering things, and it just works. It’s what makes a functional programming language into a declarative one, in practice.
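Two well-worn Haskell idioms show both halves of that claim (these are standard examples, not taken from the comment above): a self-referential list as a generator, and a lazily filled array as memoised dynamic programming.

```haskell
import Data.Array (listArray, (!))

-- A list defined in terms of itself: elements are computed only
-- as they are demanded, in whatever order the demand arrives.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- A lazily built array: each cell declares how it is computed
-- from other cells, and the runtime orders the evaluation.
fib :: Int -> Integer
fib n = memo ! n
  where
    memo = listArray (0, n) [go k | k <- [0 .. n]]
    go 0 = 0
    go 1 = 1
    go k = memo ! (k - 1) + memo ! (k - 2)
```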
I wish I saw what these guys do in Scheme. I only barely know what is happening, and it seems interesting.
The parens are so hard for me to follow and always have been. I have yet to find an editor that fixes that. Perhaps I did not try enough, or am not smart enough to actually use the editors correctly.
This is a good starter. Technically, it's Lisp and not Scheme, but once you understand one, you get the other. The benefit of Emacs Lisp is you can immediately play with it by modifying Emacs to meet your needs.
Syntax is easy. Practical semantics is a little bit harder, but it's not hard.
Editor-wise, you want an editor that does automatic indenting and some kind of matching parentheses highlighting. Emacs is one. (Once you've learned the language, you can use a fancy structural editor, but maybe don't confuse yourself with too many new things at once.)
What I found weird in Lisp (and didn't even realize at first) is that `foo` and `(foo)` mean something different: the first evaluates to whatever value `foo` is bound to, while the second calls `foo` as a function with no arguments.
I now understand it similarly to the way that, in set theory, x and {x} are different, but one is not used to the ordinary parenthesis symbol behaving like this.
Working through all the exercises in "The Little Schemer" was a huge help for me when getting started. You start with a few primitives and build up all the common tools from those with recursion; an early example from the book is building an addition function using just `add1`.
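The book works in Scheme, but the exercise transcribes directly into Haskell (with `succ` and `pred` playing the roles of the book's `add1` and `sub1`):

```haskell
-- Addition built from successor, predecessor, and recursion alone,
-- in the style of The Little Schemer. Assumes m is a non-negative
-- natural number, as in the book's setting.
plus :: Integer -> Integer -> Integer
plus n m
  | m == 0    = n
  | otherwise = succ (plus n (pred m))
```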
Interesting point about the difficulty of parsing all those parentheses! I remember getting pretty frustrated with it when I first picked up Scheme. It felt like trying to read a book written in a strange code. But then I stumbled onto paredit in Emacs—it totally transformed the way I interacted with the code. The structured editing made it feel more like composing music than wrestling with syntax.
And you're right—working through "The Little Schemer" was a game-changer for me too. There's something about gradually building up to complex concepts that really clicks, right? I wonder if there could be a way to create more beginner-friendly editors that visually guide you through the syntax while you code. Or even some sort of interactive tutorial embedded in the editor that helps by showing expected patterns in real-time.
The tension between users wanting features and implementers wanting simplicity is so prevalent in so many languages, isn't it? Makes me think about how important community feedback is in shaping a language's evolution. What do you all think would be a good compromise for Scheme—more features or a leaner report?
The "such parens, much overwhelm, so confuse" attitude of non-Lispers always baffled me. Especially since when working in C-syntax languages, I'm cautious enough to enforce an explicit order of operations (to avoid confusion that can lead to errors) that I put nearly as many parens in my C or Java code as I do in my Lisp code. What's a few more pairs of round brackets among friends, eh?
Emacs was purpose-built for working in Lisp. Out of the box, it really helps with paren matching by highlighting the matched bracket (of any type) when your cursor is over a bracket (it also highlights the open when you type the close), and by providing commands for traversing and selecting whole sexps. Those alone, combined with its smart indentation, will get you pretty far. Add something like Paredit or Parinfer if you want even more assistance with sexp manipulation.
I had a talk with someone very much allergic to Lisp (a college trauma for him, IIRC). For people like him, extra distinction through syntax is a mental benefit, while I assume Lisp fans want the opposite: removing 90% of the syntax makes things easier (sexps and FP composability being key too).