Show HN: A Common Lisp implementation in development, supports ASDF
savannah.nongnu.org
Implementation of the standard is still not complete, but breakpoints and stepping work quite well! It also has some support for watchpoints, which no other implementation has.
Now it ships with ASDF and is capable of loading systems!
Let me know if you like it. Support on Patreon or Liberapay is much appreciated
How far along does a CL implementation need to be before it can start hoovering up the available library code from other implementations (license permitting)?
As an example, how many LOOP macro implementations does the community really need, particularly when bootstrapping a new implementation?
Similarly with, arguably, 70-80% of the runtime. The CL spec is dominated by the large standard library, which ideally should be mostly portable CL, or so I would think.
You're not the first one to think so: https://github.com/robert-strandh/SICL
I'm unsure how complete it is, but it seems to cover much of the standard.
Here's a recently-written summary of the different libraries in SICL (including each library's purpose and status) http://metamodular.com/SICL-related-libraries/sicl-related-l...
A hypothetical portable layer exists, but it starts diverging once deployed, because of cleanups, refactoring, or implementation-specific hacks.
LOOP is a great example, because every LOOP is just MIT LOOP version 829, originally cleaned up by Burke. But nobody can resist deploying their personal architectural touch, so while the basic framework of LOOP remains identical across implementations, there's superficial refactoring done by pretty much everyone. If you take SBCL and Franz's Allegro CL as the state of the art in free software and commercial CL respectively, they have equally solid improvements on the original LOOP that actually produce incompatible behavior in underspecified corners of the spec. The respective developer communities are very defensive about their incompatible behavior being the correct behavior, of course. beach's SICL from the sibling comment is the xkcd joke about standards: "20 standards? We need a new standard that unifies them all! -- Now we have 21 standards."
LOOP in this case is a very simple example. CLOS, for instance, was originally implemented on top of PCL, Portable CommonLoops, a Xerox system with Interlisp heritage that was massaged into being compliant CLOS over the years. SBCL uses a ship-of-Theseus PCL, while Franz did a from-scratch rewrite. The hypothetical portability of that layer is significantly trickier than LOOP, since CLOS is deeply tied to the type system, and the boundary between some hypothetical base Common Lisp and its CLOS layer becomes complicated during system bootstrapping. But that's not all! Of course CLOS has to be deeply tied to the compiler, the type system, all kinds of things, to provide optimizations. Discovering the appropriate slicing boundary is difficult, to say the least.
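To make the type-system entanglement concrete, here's a trivial, purely standard illustration: every class defined through the CLOS layer immediately becomes a type that the base system's TYPEP and SUBTYPEP must understand, so the two layers can't be sliced apart cleanly.

    ;; DEFCLASS lives in the CLOS layer...
    (defclass point ()
      ((x :initarg :x)
       (y :initarg :y)))

    ;; ...but the class name is instantly a bona fide type,
    ;; visible to the "base" type system:
    (typep (make-instance 'point :x 0 :y 0) 'point)  ; => T
    (subtypep 'point 'standard-object)               ; => T, T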
Hey, I'm curious as to why you chose nongnu to host your project instead of GitHub/GitLab! I don't know much about it, hence my curiosity ;)
I don't like sites with heavy Javascript, especially if it's non-free. (Though recently I started using Github for a different project.)
Savannah is very basic, perhaps too much, but it's okay for my project.
I hosted the TXR git on nongnu first, starting at around late 2009 or early 2010 maybe?
I abandoned that when I discovered there's no control. I seem to recall having to wait like over a week for someone to enable non-fast-forward pushes. Overly strict and understaffed. I opted for self hosting.
I kept the project web page there, though.
Tbh, this is the first time I've seen nongnu.org used for something other than Emacs packages (I know that's on me), so much so that I even thought this was a solution for replacing Emacs Lisp with Common Lisp. :O
+1
> a debugger with stepping, a feature that most free CL implementations lack.
I think most free CL implementations have a stepper. Which ones do not?
I tried stepping in various free implementations, but I couldn't really follow the source forms and execute them one by one. Also, I couldn't find much information online. Maybe your experience is different?
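For context, the portable entry point is the standard STEP macro; how much it actually does is almost entirely implementation-dependent. A minimal sketch (fib is just a made-up test function):

    ;; A hypothetical function to step through.
    (defun fib (n)
      (if (< n 2)
          n
          (+ (fib (- n 1)) (fib (- n 2)))))

    ;; STEP is standard CL, but an implementation is allowed to do
    ;; little more than evaluate the form, especially if the code
    ;; wasn't compiled with high DEBUG settings.
    (step (fib 5))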
I haven’t used CL recently so I can’t speak from experience. But it looks like:
CMU CL, SBCL, and LispWorks have steppers.
Clozure does not. (Edit: an answer on https://stackoverflow.com/questions/37754935/what-are-effici... suggests it does...)
As I understand it, those are the big 4.
Clisp, ABCL, and Allegro also appear to have steppers.
Always cool to see a new implementation, though!
In most of those implementations (certainly in SBCL) you either break or step; you can't start stepping from a breakpoint. SBCL got some support for that this year, see https://news.ycombinator.com/item?id=43791709. It still doesn't allow stepping into functions called after the break, however.
Also, compilers are allowed to make code unsteppable in some cases, depending on optimization declarations: generally, debug needs to be >= 2 and greater than each of speed, compilation-speed and space. In some circumstances you land in decompiled/macroexpanded code, which is also quite unhelpful.
Anyway, it's not that source-level stepping isn't there at all, it's just quirky and somewhat inconvenient. A fresh implementation that does comparatively little optimization and is byte-code based can probably support debuggers better. I hope such support won't go away later when the native code compiler is implemented.
Thanks!
If I recall correctly, there are macros to control the level of code optimization? And some implementations can turn it off entirely for interactive use?
Or am I off-base?
> If I recall correctly, there are macros to control the level of code optimization?
Yup, you can either `(proclaim '(optimize (debug 3) (speed 1)))` somewhere (or use the `declaim` macro equivalent), which will take effect globally, or you can `(declare (optimize ...))` inside a particular function. It sounds great in theory - and it is great, in some respects - but this granularity makes it harder to ensure all the interesting code is steppable when you need it.
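To make that concrete, here's a minimal sketch of both granularities (the function name is made up, and exact stepping behavior still varies by implementation):

    ;; Global: affects everything compiled after this point.
    (declaim (optimize (debug 3) (speed 0)))

    ;; Local: only this function gets the debug-friendly settings.
    (defun sum-squares (xs)
      (declare (optimize (debug 3) (speed 0)))
      (reduce #'+ xs :key (lambda (x) (* x x))))

    ;; With DEBUG maxed out and dominating the other qualities,
    ;; (step (sum-squares '(1 2 3))) has the best chance of
    ;; stopping at each source form.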
Congratulations! Always good to see another Lisp in the world.
Have you thought about writing up your experience?
Also, my Patreon page (https://www.patreon.com/andreamonaco) has behind-the-scenes posts, some even in the free tier
Thanks! Maybe I could do that, if I see that people are interested
Does alisp plan to eventually support full compilation to native code, or will it mainly stay an interpreter with limited compilation?
Yeah, the goal is first bytecode compilation and then full native-code compilation.
Do you have a goal in mind for this project?
Ideally I'd reach ANSI compliance, first with a bytecode compiler and then with a full one
Is there some important shortcoming of all the existing Common Lisp implementations that you would like to correct?
Awaiting answers. Seems stepping is one.
Btw, I stick to SBCL as I use Vim, and so far the script here works for me. Might try this when I'm back doing Lisp.
https://susam.net/lisp-in-vim.html
Yeah, advanced debugging features like watchpoints are very important to me
What is ASDF?
ASDF - Another System Definition Facility - is the de facto standard build system for Common Lisp.
https://asdf.common-lisp.dev/
In Common Lisp you don't need a build system at all; you can `(load "file.lisp")` everything and it should generally just work. But of course build systems are useful tools, so ASDF exists nonetheless, and it's nice enough that nobody has built a better, more widespread Common Lisp build system.
Some good trivial examples are in the Lisp Cookbook:
https://lispcookbook.github.io/cl-cookbook/systems.html
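For a taste, a minimal system definition might look something like this (the system and file names are made up):

    ;; my-app.asd
    (asdf:defsystem "my-app"
      :description "Toy example system."
      :depends-on ()                ; other systems would be listed here
      :components ((:file "package")
                   (:file "main")))

    ;; Then, at the REPL, with the .asd somewhere ASDF can find it:
    ;; (asdf:load-system "my-app")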
No idea why you're being downvoted for asking a simple question about an acronym. From Wikipedia [1]:
> ASDF (Another System Definition Facility) is a package format and a build tool for Common Lisp libraries. It is analogous to tools such as Make and Ant.
Contemporary developers using more mainstream languages are likely more familiar with asdf [2], the "Multiple Runtime Version Manager".
[1] https://en.wikipedia.org/wiki/Another_System_Definition_Faci...
[2] https://asdf-vm.com/
[flagged]
I'm not sure I understand. What I do in my project is very common practice: generated files (like the configure script) are not part of the repository, but they are put in released tarballs
It's a bad practice commonly found in GNU projects, which results in an overcomplicated, inconvenient and unstable build system that will discourage future collaboration. Many of these projects are very old, two decades or more; they are living with ancient decisions made in a world of dozens of incompatible Unix forks.
One thing to do instead is to just write a ./configure script which detects what you need. In other words, be compatible at the invocation level. Make sure this is checked into the repo. Anyone checking out any commit runs that, and that's it.
Someone who makes a tarball using git, out of a tagged release commit, should have a "release tarball".
A recent HN submission shows a ./configure system made from scratch using makefiles, which parallelizes the tests. That could be a good starting point for a C on Linux project today.
> A recent HN submission shows a ./configure system made from scratch using makefiles, which parallelizes the tests. That could be a good starting point for a C on Linux project today.
Not everything is C, or GNU/Linux. The example also misses much of the basic functionality that makes GNU autotools amazing.
The major benefit of GNU autotools is that it works well, especially for new platforms and cross compilation. If all you care about is your own system, a simple Makefile will do just fine. And with GNU autotools you can also choose to use just GNU autoconf... or just GNU automake.
Having generated files in the release tarball is a good practise: why should users have to install a bunch of extra tools just to get a PDF of the manual or other non-system-specific files? It is not just build scripts, either; installing TeX Live just to get a PDF manual of something is super annoying.
Writing your own ./configure that works even remotely the way users expect is non-trivial and complicated -- we did that 30 years ago, before GNU autoconf. There is a reason why we stopped doing that ...
I'd go so far to think that GNU autotools is the most sensible build system out there...
> Not everything is C,
AutoTools are squarely oriented toward C, though.
If you're not using C or C++, you're probably not using AutoTools.
(I think I might have seen someone's purely Python or shell script project using Autoconf, but that was just ridiculously unnecessary.)
> Having generated files in the release tarball is a good practise
Without a doubt, it is a good idea to ship certain generated files in a source code distribution. For instance, if we ship a generated y.tab.c file, the user doesn't have to have a Yacc program (let alone the exact version of the one we would like them to use).
What's not good practice is for anything in the release tarball to differ from the git commit it was made from.
"Release tarball" is itself a configuration management antipattern. We are a good two decades past tarballs now. A tarball should only be a convenience for people who don't need a git clone.
Every generated thing that a downstream user might need should be in version control, and updated whenever its prerequisites change. This is a special exception to the general rule that only primary objects should be in version control. Secondary objects for which downstream users don't have generation tools should also be in version control.
This is true and standard for (really) old projects, and dealing with these scripts and their problems used to be the bane of my existence 10 years ago. But I can't say I've encountered any such projects in the last 5 or so years.
Either they use a modern programming language (which typically has an included build system, like Rust's cargo or simply `go build`), or they use simple Makefiles. For C/C++ codebases, it seems like CMake has become the dominant build system.
All of these are typically better than what GNU autoconf offers, with modern features and equal or better flexibility for dealing with differences between operating systems, distributions, and optional or alternative libraries.
I don't really see why anyone would pick autoconf for a modern project.
CMake is by a wide margin the worst build tool I've used. That covers at least autoconf, gmake, nmake, scons, waf, tup, Visual Studio, the Boost thing, bash scripts and Lua scripts. Even the hand-edited XML insanity of Visual Studio caused negligible grief compared to CMake.
I strongly concur. CMake is incompetently designed and implemented. The authors had no idea how to make a build language, but didn't let that stop them.
Having used both autoconf and CMake, I have a strong preference for autoconf (plus hand-written makefiles; I've never been able to get into automake). It's just easier for me to use, especially when it comes to writing tests for supported functions and adding optional features you want to enable or disable via configure script options.
In my opinion, automake is the weakest part of the autotools chain. Look at this section of the manual for example https://www.gnu.org/software/automake/manual/automake.html#G...: it says that automake doesn't recognize many GNU Make extensions, and can get confused even by weird whitespace...
CMake is really more of a C++ crowd thing, it never won the mindshare with C.
> I don't really see why anyone would pick autoconf for a modern project.
If you build for your system only and never ever plan to cross compile, by all means go with a static makefile.
A good way to make sure your project won't cross compile is to use Autoconf. Rampant use of Autoconf is the main reason distros gave up on cross compiling and started using QEMU. Developers who use Autoconf and who don't know what cross-compiling is will not end up with a cleanly cross-compiling project that downstream packagers don't have to patch into submission.
Most of my disdain for Autoconf was formed when I worked at a company where I developed an embedded Linux distro from scratch. I cross-compiled everything. Most of the crap I had to fight with was Autoconf projects. I was having to do things like export various ac_cv_... internal variables that nobody should have to know about, and patch configure scripts themselves. Fast forward a few years and I see QEMU everywhere for "cross" builds.
The rest of my disdain comes from having worked with the internals of various GNU programs. To bootstrap their build systems from a repository checkout (not a release tarball) you have to follow their specific instructions. Of course you must have the Autotools installed. But there are multiple versions, and they generate different code. For each program you have to have the right version that it wants. If you have to do a git bisect, older commits may need an older version of the Autotools. You bootstrap the configure system from scratch, and the result is the privilege of now running configure from scratch. It's simply insane.
You learn things like touching certain files in a certain order to prevent a reconfigure that has about a 50% chance of working.
Let's not even get into libtool.
The main idea behind Autoconf is political. Autoconf based programs are deliberately intended to hinder those who are able to build a program on a non-GNU system and then want to make contributions while just staying on that system, not getting a whole GNU environment.
What I want is something different. I want a user to be able to use any platform where the program works to obtain a checkout of exactly what is in git, and be able to make a patch to the configuration stuff, test it and send it upstream, without installing anything that is not required for just building the program for use.
Autoconf and automake have the best support for cross-compiling there is; everything else is a poor imitation, at least from the perspective of the folks doing Debian's cross-build stuff. With Debian's multi-arch policy, cross-toolchain packages and dpkg-dev/debhelper support for driving common cross-compiling options, plus fixing a ton of edge cases, IIRC more than 50% of Debian packages are now cross-compilable without QEMU. Often they are bit-for-bit identical to the native compilation too.
https://wiki.debian.org/CrossCompiling https://crossqa.debian.net/
A build system has great support for cross compilation when downstream package maintainers don't have to lift a finger to make it work, even though the upstream developer has not even tried cross compiling.
> A good way to make sure your project won't cross compile is to use Autoconf.
Yeah well this is not quite true. Most embedded distros leverage autotools heavily. In Yocto you just specify autotools as the package class for the recipe and in most cases it will pull, cross compile and package the piece of software for you with no intervention.
The tools are clearly antiquated, written in questionable taste, and 80% of the cases they solve are no longer relevant. They are still very useful for the rest.
> A good way to make sure your project won't cross compile is to use Autoconf. Rampant use of Autoconf is the main reason distros gave up on cross compiling and started using QEMU. Developers who use Autoconf and who don't know what cross-compiling is will not end up with a cleanly cross-compiling project that downstream packagers don't have to patch into submission.
Cross compilation for distributions is a mess, but it is because of a wide proliferation of build systems, not because of the GNU autotools -- which have probably the most sane way of doing cross compilation out there. E.g., distributions have to figure out why ./configure is not supporting --host because someone decided to write their own thing ...
> The main idea behind Autoconf is political. Autoconf based programs are deliberately intended to hinder those who are able to build a program on a non-GNU system and then want to make contributions while just staying on that system, not getting a whole GNU environment.
Nothing could be further from the truth: GNU autoconf started as a bunch of shared ./configure scripts so that programs COULD build on non-GNU systems. It is also why GNU autoconf and GNU automake go to such lengths in supporting cross compilation, to the point where you can build a compiler that targets one system, runs on another, and was built on a third (a Canadian cross compile).
> distribution have to figure out why ./configure is not supporting --host cause someone decided on writing their own thing ...
1. Most of the time, when ./configure won't properly support --host, it isn't because someone wrote their own thing, but because it's Autoconf.
2. --host is only for compilers and compiler-like tools, or programs which build their own compiler or code generation tools for their own build purposes, which have to run on the build machine rather than the target (or in some cases both). Most programs don't need any indications related to cross compiling because their build systems only build for the target. If a program that needs to build for the build machine has its own conventions for specifying the build machine and target machine toolchains, and those conventions work, that is laudable. Deviating from the conventions dictated by a pile of crap that we should stop using isn't the same thing as "does not work".
A few of my personal projects cross compile via static makefile. Is there something wrong with that?
If you're not writing something that compiles its own tools to be used during the rest of the build, or something that isn't a compiled language at all, all you have to do is respect the CC, CFLAGS, LDFLAGS and LDLIBS variables coming from the distro. Then you will use the correct cross toolchain and libs.
If you need to compile programs that run on the build machine, you should have a ./configure script which allows a host CC and target CC to be specified, and use them accordingly. Even if you deviate a bit from what others are doing, if it is clearly documented and working, the downstream package maintainer can handle it.
I would be excited to use this, but since it's GPLv3 I can't actually use it for a lot of projects I'd want to make ;-; Is it possible to relicense under LGPL or MPL instead?
In general, people who license something as GPLv3 probably consider that a feature, not a bug.
I mentioned here recently that I released a personal project under the GPLv3. The very first issue someone filed on GitHub was to ask me to relicense it as something more business-friendly. I don't think I've ever been so offended by an issue before. If I'm writing something for fun, I could not possibly be less interested in helping someone else monetize my work. They can play by Free Software rules, or they can write their own version for themselves and license it however they want. I don't owe them the freedom to make it un-Free.
The fact that this is hosted on a FSF-managed service indicates the author likely sees it similarly.
I generally agree but it's worth noting that languages are a bit different. Obviously there are GPL'd compilers but those often make an explicit carveout for things like the runtime and standard library. Meanwhile in the Lisp world my impression is that most (but certainly not all) implementations are permissively licensed in part due to concerns that shipping an image file is essentially shipping the entire language implementation verbatim.
They can always reward the author, who most certainly will make a specific business-friendly license for them.
Thanks for pointing that option out! Yes, I am a simple man: you can buy any software I've ever publicly released for the right price. I don't know what those prices are in advance because I've never thought of it, but if you want to give me $10M for some tool I wrote so that I can provide generational wealth to my family, drop me a line.
Of course, no one has expressed interest in doing that yet, so this is purely hypothetical.
That totally makes sense and I do appreciate why that would be a problem for some users.
And yet, this is a labor of love by a single person hosting it on the FSF's servers. I don't know them, and this is pure conjecture, but I suspect they couldn't care less if that made it challenging for commercial users. There are plenty of other Lisps for them to choose from.
Hard to believe this comment could be serious, but nonetheless, for the impartial observers, there is a healthy ecosystem of Common Lisp implementations, from "permissive" open source all the way to (expensive) commercial, proprietary ones.
https://common-lisp.net/implementations
I think a full-featured GPLv3 implementation would be very cool, personally.