The reason GCC is not a library (2000) (gcc.gnu.org)
168 points by todsacerdoti 2 days ago | 127 comments

Oh, this is one of my favorite (and sad!) dramas in free software.

Five years later, the main LLVM developer proposed [0] integrating LLVM into GCC.

Unfortunately, this critical message was missed due to a mail mishap on Stallman's part, and he publicly regretted both of his errors (missing the message and not accepting the offer) ten years later [1].

The drama was discussed in real time here on HN [2].

[0] https://gcc.gnu.org/legacy-ml/gcc/2005-11/msg00888.html

[1] https://lists.gnu.org/archive/html/emacs-devel/2015-02/msg00...

[2] https://news.ycombinator.com/item?id=9028738


I feel like this is evidence that, even for the most serious of engineers, email lists are not an ideal way to communicate.

It also speaks to an absolute failure of governance. If I missed an important email on a FreeBSD mailing list, you can bet that a dozen other people would see it and either poke me about it or just go ahead and act upon it themselves.

The fact that RMS missed an email and nobody else did anything about it either is a sign of an absolutely dysfunctional relationship between the project and its leadership.


If I had to guess, the actual GCC maintainers[1] had no interest in integrating a very large codebase into GCC that would duplicate a lot of its functionality.

LLVM could have been integrated under the GNU/FSF umbrella as a separate project of course.

[1] since the egcs debacle was resolved, RMS has had very little control of GCC


So, having been around a lot of different communication methods, I think email lists aren’t ideal, but for serious projects they’re better than all the alternatives.

Chat has a way of getting completely lost. All your knowledge that goes into chat either goes into somebody’s head or it just disappears into the ether. This includes Slack, Discord, Teams, etc. Useful as a secondary channel but serious projects need something more permanent.

Bug tracking systems just don’t support the kind of conversations you want to have about things. They’re focused on bugs and features. Searchability is excellent, but there are a lot of conversations which just end up not happening at all. Things like questions.

That brings us back to mailing lists. IMO… the way you fix it is by having redundancies on both sides of the list. People sending messages to the mailing list should send followup messages. You should also have multiple people reading the list, so if one person misses a message, maybe another gets it.

Mailing lists are not perfect, just better than the alternatives, for serious projects.

(I also think forums are good.)


This is why the D community has forums. The messages are all archived as static web pages and are a gold mine of information.

https://www.digitalmars.com/d/archives/digitalmars/D/


BTW, like HackerNews, the D forums don't allow emojis, icons, javascript, multiple fonts, and all that nonsense. Just text. What a relief!

So it's just like an email list which has an archive, except you can't use email clients and instead use a web-based interface?

You can use the web interface if you post:

https://forum.dlang.org/

The static pages are there for browsing. They load fast, and I like the entire thread being on one page. No clickety-clicking and forgetting where you are in a forest of postings.


^_^ sucks when you actually need to talk about emoji though :/

Stating the Unicode code points as U+1F4A9 or (D syntax) \U0001F4A9 is a reasonable workaround.
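For instance, a minimal sketch (hypothetical code, not from the D forums; the same \U escape spelling works in C, C++ and D string literals, assuming a UTF-8 execution character set):

    #include <iostream>
    #include <string>

    int main() {
        // \U0001F4A9 names the code point U+1F4A9 by number, so the
        // source file itself stays plain ASCII.
        std::string emoji = "\U0001F4A9";
        std::cout << "U+1F4A9 is " << emoji.size() << " bytes in UTF-8\n";
    }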

We discourage posts that aren't relevant in some way to D programming.

One of the reasons I enjoy HackerNews is dang's enlightened and sensible moderation policy.


I think OP meant cases like, "I need to process a string with this emoji in D" etc

Would you ever need to talk about a specific emoji?

¯\_(ツ)_/¯

a forum is just a web interface for a mailing list.

they are both messages in threads. what's different is the presentation.

some of the forums i use support both seamlessly, and i can choose which interface i prefer.

of course for the issue we are discussing, presentation is what matters. but, for most forums i do not believe they would have made it any easier not to miss a message. just look at hackernews. it is actually quite difficult to track which messages i have not read yet (even if there is a marker on what's new), and it is therefore very easy to miss some.

that is not the case with email, because my mail client tracks the read/unread status for each message. the problem with RMS has nothing to do with the form of a mailing list but with his particular async style of accessing the internet.


Piling on about chat. Slack threads are an abomination. They aren’t inline with the main channel so you can’t cut and paste an entire conversation with threads. And does exporting a channel include threads? Who knows because the admin wouldn’t do it for me.

praise https://github.com/rusq/slackdump

it does include threads, and no need for admins


You’ve saved my life!

Threads are amazing as an idea and what you're missing is just an implementation detail in Slack. More platforms should have threads (WhatsApp, etc).

They're really bolted on in Slack though. I've been using Slack since 2019 and they were bolted on and annoying then, and they still are.

Their biggest problem is that nobody uses them, though.


Threads were introduced in, what, late 2016? The start of 2017? Some time around then, anyway. They were even more badly integrated at the start; there was no way to be notified about new messages in a thread, for example. By 2019 things were a little better, but - as you noticed - still not great.

And since then... nothing. No more improvements. Development seems to have more or less halted since the Salesforce acquisition.

I used irc for a couple of decades before Slack, so was happy with purely linear chat (yeah, I'm also one of those weirdos who likes rebasing in git). Threads make everything more horrible, but at this point it feels like I just have to put up with it.


> (yeah, I'm also one of those weirdos who likes rebasing in git).

I like both rebasing and threads. More generally, I hate hate hate the 20+ individual messages on a single topic in a channel. It's just so annoying, threads are great for stuff like that, and give you one target for a reminder/update on whatever the issue is.

And yet, every time I change companies I realise again how much most people just don't use threads on Slack.


it's not the existence of threads that is the problem but their presentation. instead of hidden threads i'd prefer to be able to quote a message and have it shown inline. the fact that threads are so hidden is a major reason to avoid using them (for me at least)

alternatives:

discord has inline quoting and threads. threads are a bit more visible.

zulip creates a new thread for every message because it prompts you to set a topic, and then you browse the messages by topic.


Nobody uses them? You need better companies :-)) I've worked at a startup using Slack and a huge company using Slack plus I've seen a bunch of other companies. They were all using threads everywhere, they're incredibly useful, especially for big or active channels.

> Nobody uses them? You need better companies :-))

To be fair, this is hyperbolic. To clarify: small groups within most orgs tend not to use threads in their internal channels, AND they congratulate everyone on their birthdays there. The combination irritates me with repeated notification spam.


You can educate them on this. It's not easy and tact is needed, but it's doable.

Discourse (or other forums) would be my pick, and it's what I've successfully driven other projects I'm involved with to adopt.

They can be treated like mailing lists, but are easy to navigate, easy to search and index, and easy to categorize.


What are the current practical non-self-hosted options for an open source project mailing list? We (portaudio) are being (gently) pushed off our .edu-maintained mailing list server, google groups is the only viable option that I know about, and I know not everyone will be happy about that choice.


Freelists[1] is still around, LuaJIT hosts its mailing list there. So is Savannah[2]. Would also be interesting to know if it’s actually realistic to ask Sourceware[3] to give you a list or if those pages only reflect the state of affairs from two decades ago. (Only the last of these uses public-inbox, which I personally much prefer to Mailman.)

[1] https://www.freelists.org/

[2] https://savannah.nongnu.org/, https://lists.nongnu.org/

[3] https://sourceware.org/mission.html#services


We use Mattermost and it's working pretty well. The search is decent and it captures a lot more of the daily chitchat that isn't long/serious enough for email and would otherwise be lost. The real benefit comes when you need to find some snippet that you know was discussed 18 months ago, when too much water has passed under the bridge to remember exactly what was said.

Still, we are discussing it almost 30 years after it happened. What alternative messaging system offers such openness and stability? I don't see anything other than publicly archived mailing lists.

I think Mozilla has a bugzilla instance that's been around almost as long, e.g. this is 26 years old

https://bugzilla.mozilla.org/show_bug.cgi?id=35839#:~:text=C...


Bugzilla is good for some things, but terrible for discussions, questions, offers, advice, etc, etc.

JIRA /s

There is no communication method where this isn't possible. Email can be missed, chat can be missed, phone calls can be missed, even talking to someone in person can be missed. All forms of communication can fail such that the person sending the message thinks it was received when it wasn't. So one would need evidence that email is more likely to fail in this respect, rather than evidence it can happen at all, to show that email is a worse communication method.

> All forms of communication can fail such that the person sending the message thinks it was received when it wasn't.

With phone calls? Not that I suggest using calls as a way to manage your project, but at least you typically know that the recipient is there and listening before you transmit.


You can go yell in the other person's ear

Sorry...maybe I'm dense. Email has worked for decades. If I don't catch something this relevant in an email forum, why would I automatically, without question, see it and understand its relevance in chat, Slack, etc.?

Serious question, since in my experience even specifically assigning someone a Jira tix doesn't guarantee they'll actually look at it and act.


The fault here was entirely Stallman's own. He has some kind of byzantine but ideologically-pure protocol for reading his emails in batches, which he has to request explicitly from someone or something that retrieves them for him.

You can't infer anything from this episode about the suitability or unsuitability of email for any particular purpose.


> He has some kind of byzantine but ideologically-pure protocol for reading his emails in batches,

This caught my eye as well.

I'm not sure what his objection to accessing email in a normal-ish way might be. Any ideas?

My best guess is that it's something surveillance-related, but really not sure.


I think OP might be confusing Stallman's website protocol with that for email:

> I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I usually fetch web pages from other sites by sending mail to a program (see https://git.savannah.gnu.org/git/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly.

(he describes his arrangements in detail here: https://www.stallman.org/stallman-computing.html)



You could simply fix it by marking unreads bold. Doesn't sound so byzantine now does it?

It's the worst, except for all the others.

20 years ago someone missed an important email.

Every 20 seconds someone misses an important message in a thread hidden deep in a chat.

I don't understand how we have moved from email and IRC to the various chats. The latter seem to actively hide communication, as if by deliberate sabotage.


We have moved from IRC “to the various chats”? ... IRC is a chat. What makes IRC special or different in your view apart from being old?

I know that in the old days IRC chats were sometimes “made public” in the sense that a bot would scrape the entire chat and put it on the web. If that's what you're after, there's no technical reason I can think of you can't also do that with Discord except that it's not as trivial to implement because it's not just text and not just a single linear chatroom.

The discussion here is about archivability and searchability and I'm really not sure an IRC log fits that bill any more than a hypothetical Discord log.


> What makes IRC special or different in your view apart from being old?

I don't have a tonne of experience with all the chat offerings, just lots with one of the big ones, but to me the main flaw the new ones seem to have is that they have these "threads". Maybe I'm old and senile ("skill issue"), but if you reply to a message of mine into a thread, there's a 99% chance I'll never see it.

Maybe this is a UI issue, not a skill issue.

I have asked other people how they manage to follow updates in threads in order to see updates, and the answer seems to be that they don't. They just accept that many messages are just never seen by anybody. So it's not just me.

IRC doesn't have this. Starting a new channel, while extremely low effort, is not as integrated in the message flow. So people don't, the way they spawn threads left and right in new chats.

A second reason, in my experience (which may be atypical), is that IRC is seen as obviously not a replacement for a design doc or an email. But because new chats have more of an illusion of being authoritative rather than ephemeral, more people go "oh the rationale for that is in the discord/slack somewhere", whereas nobody with shame would ever say that about IRC.

> archivability and searchability and I'm really not sure an IRC log fits that bill any more than a hypothetical Discord log.

Yes, the Discord log is much better. But that's one of my points. It's better, so people choose it over something more suitable. So it's not "worse is better", but "better makes worse".


Look, even the most serious of engineers would be thrown off their game and miss an important email if someone offered to buy them a parrot. I assume that’s what happened.

Somebody back in 2015 mentioned a relevant response to Richard Stallman from David Kastrup[1]. It's brilliant.

Did he convince other GCC devs with that post? Mentor younger devs on free software strategy?

What's happened in the ten years since?

C'mon GCC mailing list lurkers-- spill the tea! :)

1: https://lists.gnu.org/archive/html/emacs-devel/2015-02/msg00...


Of course he wishes he had accepted the offer in hindsight, now that GCC is heading towards irrelevance (slowly, but it definitely is).

Doesn't mean he would have accepted it if he had seen the message.


A lot of languages are building alternative backends to LLVM because it is so slow. How is it so clear that LLVM itself will stay relevant in the future?

What? GCC is absolutely no way whatsoever heading towards irrelevance. In embedded, desktop Linux, and server Linux, almost everything is built with GCC.

Yeah because everything was built with GCC when LLVM was first created and it hasn't displaced them all yet. It will though. All new languages use LLVM as a backend - nobody is using GCC. Every company that writes some kind of custom tooling (e.g. for AI) uses LLVM. Most compiler research is done on LLVM.

It will take a very long time (30 years maybe?) - I did say slowly - but the direction of travel is pretty clear.


LLVM is the playground for new languages and those that want to avoid GPL. But it is also a bloated mess. I personally prefer to invest my (limited) time into GCC, but actually hope that something new comes up. Or rather, that those big frameworks get decomposed into modular tooling around common standards and intermediate languages.

Yeah I agree it's a mess, but it's a mess that you can integrate with and augment fairly easily. There's definitely scope for a cleaner modern replacement (maybe written in Rust?). Absolutely enormous amount of work though so I won't hold my breath.

I hope for a Unix-like system written in C and a good C compiler toolbox. I would happily remove all the other nonsense - including Rust - from my life.

Besides the LLVM drama, we do have a libgccjit library now, which uses just the GCC backend to create a better JIT than LLVM. More speed and more backends.
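(For the curious, a minimal sketch of what driving libgccjit looks like, along the lines of the "return 42" example from its tutorial; the function names are the real C API, built here as C++, and the file name is made up:)

    // Build with: g++ jit42.cpp -lgccjit
    #include <libgccjit.h>
    #include <cstdio>

    int main() {
        gcc_jit_context *ctxt = gcc_jit_context_acquire();
        gcc_jit_type *int_type = gcc_jit_context_get_type(ctxt, GCC_JIT_TYPE_INT);

        // An exported function "int forty_two(void)".
        gcc_jit_function *fn = gcc_jit_context_new_function(
            ctxt, nullptr, GCC_JIT_FUNCTION_EXPORTED, int_type, "forty_two",
            0, nullptr, 0);

        // Its body is a single block: "return 42;".
        gcc_jit_block *block = gcc_jit_function_new_block(fn, nullptr);
        gcc_jit_block_end_with_return(
            block, nullptr,
            gcc_jit_context_new_rvalue_from_int(ctxt, int_type, 42));

        // Compile to machine code in memory and fetch a callable pointer.
        gcc_jit_result *result = gcc_jit_context_compile(ctxt);
        auto forty_two = reinterpret_cast<int (*)()>(
            gcc_jit_result_get_code(result, "forty_two"));
        std::printf("%d\n", forty_two());

        gcc_jit_result_release(result);
        gcc_jit_context_release(ctxt);
    }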

Did Dave Malcolm contact RMS privately as he said then? I only know about https://gcc.gnu.org/legacy-ml/gcc-patches/2013-10/msg00228.h...

Why did RMS back down then? He still opposes an FFI for Emacs, even though behind the scenes there is now an FFI in Emacs for GTK.


Is libgccjit used a lot? I think I have only seen it in emacs.

And poke.

But emacs is big enough. Cannot get much bigger


Strange philosophy, imo. Feels very much like saying "My version of free is best, and I must force you to implement it yourself".

Stallman's version of free is free to the end user. He cares more about whether the end user will have access to the source code and the means to modify their software to remove any anti-feature, and less about whatever freedoms the developers of said software would want (such as the freedom to close the source and distribute only binaries).

Ultimately Stallman was against a kind of digital feudalism, where whoever developed software had power over those that didn't


I've always thought of it as: Stallman wants the code itself to enjoy freedom, more than caring about the freedom of the people who create and use that code.

> free to the end user

To which no one can answer how it creates freedom without the mass adoption needed to actually get the software into end users' hands. The great contradiction in FSF philosophy is creating highly pure software within a monastery of programmer-users while simultaneously insisting on a focus on end-user freedoms, without reconciling programmer incentives to build what those end users need.


I'm responding to this comment as an end user with a free browser and free OS and it works perfectly fine. Billions of users do in fact.

So there doesn't need to be an answer. He can just show it to you.


Judging by "billions of users", it sounds like you mean Android, in which case neither the browser nor the OS are really free in FSF sense.

You are correct that the FSF does not consider the entire system at large as free. AOSP and WebKit are certainly really free in the FSF sense, but sure, almost all Android distributions in practice also contain some non-free software in addition to the free stuff, critically the firmware blobs for radio and other chipset drivers. In principle, you can get a fully free Android to browse the web over WiFi if you have appropriate hardware; the code for all parts of the stack to do so is available. Things like GrapheneOS come close. Most users can install a fully free browser inside regular Android too (same on iOS, Windows, Mac and desktop Linux). Realistically, the biggest hurdle to user freedom on Android is not the non-free software, but the devices employing signature verification in hardware/bootloader (i.e. Tivoization).

How much of it is GPL3?

Why would it matter in this context? The GP was asking a theoretical question akin to "how is it physically possible for the sky to be blue?" and I am just pointing at the sky saying "look!"

It is Free Software whether it is BSD or GPL3. By all measures, Free Software as originally envisaged has been a massive success. It's just the goalposts have expanded over the years.


> It is Free Software whether it is BSD or GPL3.

You clearly did not read the FSF manifestos and don't understand their positions. They will call the BSD license "permissive" and will correct you if you attempt to call BSD "free/libre".

> Why would it matter

The FSF didn't build "open source." They actively work to discredit open source. Let's not give them credit for what they tirelessly denounce.

Linux is open source, but did not adopt the GPL3. Firefox is open source but uses MPL. If the FSF is a leader who is responsible for all of these great projects, why doesn't anyone want to use their license?


> ...will correct you if you attempt to call BSD "free/libre".

Wrong. https://www.gnu.org/philosophy/categories.en.html

> The FSF didn't build "open source." They actively work to discredit open source. Let's not give them credit for what they tirelessly denounce.

Where did I ever use that term in this conversation?


> Where did I ever use that term in this conversation?

You didn't because you glossed over my central point. Open source's success in the last twenty years came in spite of the FSF, not because of it.


Assuming LGPLv3 and AGPLv3 count too - quite a lot, on both my laptop and phone. Most of the core utilities, pretty much entire system UIs, most applications I run... The MPL browser I use on the laptop and GPLv2 kernel are probably the most notable exceptions, I guess.

And yet despite your theory it appeared to work quite well in practice.

If reality disproves your theory, it's not reality that's wrong.


Open source works. FSF tactics for producing and promoting free/libre do not. Let's not give the FSF credit for what open source does.

> FSF tactics for producing and promoting free/libre do not.

What are your criteria for judgement here? The FSF's GPL licenses have in reality worked quite well, if the criteria are longevity, high usage, popularity, utility, and maintenance.

If your only criteria is "Well, they're only #2", then sure, by that criteria they did not "work".


Just look at LLVM and GCC, the central subject here. GCC is hanging on while entire ecosystems build on top of LLVM, primarily for technical reasons. What started off as insularity led to technical weakness. Technical weakness will end in obscurity. What is free after that?

> GCC is hanging on while entire ecosystems build on top of LLVM, primarily for technical reasons.

What are you talking about? GCC usage is well ahead of LLVM usage.

And even if it wasn't ahead, it's still the default compiler for almost every production micro controller in use.

GCC is the default on most deployed systems today.


Git contributors:

LLVM: 5k
GCC: 1k

This is not what sustainability looks like:

https://trends.google.com/trends/explore?date=today%205-y&q=...

The crossover will more likely be driven by silicon trends providing an opportunity for LLVM's velocity to translate to enough competitive advantage for casual users to want LLVM. Once that happens, you will see some Linux distributions switch over. Hard liners will fork and do what they do, but asking people to use a compiler that gives them a worse result or is harder to work with isn't going to hold the gates.

Linus prefers LLVM for development.

People need to get out of the 90s and look at some data.


Right. That could happen in the future, but your assertion was that it had already happened, and you used that assertion as support for why the GPL already resulted in lower use.

I can't see the future, but I can tell you without a doubt that, as things stand right now, the GPL has been a runaway success for users' rights.

Will that change in the future? Who knows? But that wasn't your claim nor my counterclaim.


Not that strange, as GCC was an effort toward the goal of developing an ecosystem of Free (as in speech) software. While the FSF has sometimes made allowances for supporting non-Free software (whether non-copyleft open source or proprietary), these were always tactics in support of the longer-term strategy. Much like you might spend marketing funds on customer acquisition in the service of later recurring revenue.

As RMS indicated, this strategy had already resulted in the development of C++ front ends for the Free software ecosystem, that would otherwise likely not have come about.

At that time the boom in MIT/BSD-licensed open source software predominantly driving Web apps and SaaS in languages like Rust and Javascript was still far away. GCC therefore had very high leverage if you didn't want to be beholden to the Microsoft ecosystem (it's no accident Apple still ships compat drivers for gcc even today) and still ship something with high performance, so why give up that leverage towards your strategic goal for no reason?

The Linux developers were more forward-leaning on allowing plugins despite the license risks but even with a great deal of effort they kept running into issues with proprietary software 'abusing' the module APIs and causing them to respond with additional restrictions piled atop that API. So it's not as if it were a completely unreasonable fear on RMS's part.


Nit: non-copyleft open source is still free software (as defined by FSF).

"My version of Free is best" is like the defining feature of GNU/FSF.

(Not knocking them, I think sometimes being obnoxiously stubborn is the only way to change the world)


True. Some of their positions come across as "extreme" and rms' personality can be quite abrasive especially these days when even much smaller incidents are amplified by social media.

However, I quite value their stand. It's principled and they are, more or less, sincere about it. Many of their concerns about "open source" (as contrasted to free software) being locked up inside proprietary software etc. have come true.


Historical context is not merely important, it is indispensable.

The statement in question was issued during a period in which software vendors routinely demanded several hundred — and in some cases, thousands — of dollars[0] for access to a mere compiler. More often than not, the product thus acquired was of appalling quality — a shambolic assembly marred by defects, instability, and a conspicuous lack of professional rigour.

If one examines the design of GNU autoconf, particularly the myriad of checks it performs beyond those mandated by operating system idiosyncrasies, one observes a telling pattern — it does not merely assess environmental compatibility; it actively contends with compiler-specific bugs. This is not a testament to ingenuity, but rather an indictment of the abysmal standards that once prevailed amongst so-called commercial tool vendors.

In our present epoch, the notion that development tools should be both gratis and open source has become an expectation so deeply ingrained as to pass without remark. The viability and success of any emergent hardware platform now rests heavily — if not entirely — upon the availability of a free and competent development toolchain. In the absence of such, it shall not merely struggle — it shall perish, forgotten before it ever drew breath. Whilst a sparse handful of minor commercial entities yet peddle proprietary development environments, their strategy has adapted — they proffer these tools as components of a broader, ostensibly cohesive suite: an embedded operating system here, a bundled compiler there.

And yet — if you listen carefully — one still hears the unmistakable sounds of discontent: curses uttered under breath and shouted aloud by those condemned to use these so-called «integrated» toolchains, frustrated by their inability to support contemporary language features, by their paltry libraries, or by some other failure born of commercial indifference.

GNU, by contrast, is not merely a project — it is a declaration of philosophy. One need not accept its ideological underpinnings to acknowledge its practical contributions. It is precisely due to this dichotomy that alternatives such as LLVM have emerged — and thrived.

[0] Throw in several hundred more for a debugger, several hundred more for a profiler, and pray that they are even compatible with each other.


Yeah, people trying to enforce their ideals upon others. What a strange thing indeed.

They don't "enforce" anything on anybody. Participating in the ecosystem was always and still is a free choice.

And the result is that most new open source languages (and commercial companies) use LLVM instead of GCC as the backend => way more engineering resources are dedicated to LLVM.

For what it's worth, the leverage did work, just not forever. It was a play with a limited lifetime. It didn't necessarily need to shake out that way; if GCC had been slightly easier to write for, but not too easy, people would probably have invested more. It took a major investment to create a competing product.

I thought GPLv3 adoption by GCC was what really lit the flames on moving to llvm by commercial entities?

you only need to worry about GPLv3 if you are modifying gcc in source and building it and distributing that. Just running gcc does not create a GPLv3 infection. And glibc et al are under a library license (LGPL) so they don't infect what you build either, most especially if you are not modifying their source and rebuilding them.

And what we've seen from e.g. Apple is that "make a private fork and only distribute binaries" is exactly what they wanted the whole time.

> you only need to worry about GPLv3 if you are modifying gcc in source and building it and distributing that.

That's the context here. If you build a new compiler based on GCC, GPL applies to you. If you build a new compiler based on LLVM it doesn't.


the context here doesn't actually specify whether we are talking about companies using llvm sources to create proprietary compilers (or maybe integrated with a proprietary IDE) or using llvm to quickly bootstrap and craft a compiler for a new processor, new language, etc., where they will distribute the source to the compiler anyway

but such a compiler or IDE would not GPLv3-infect its users' target sources and binaries.


The main problem with GPLv3 specifically from the perspective of various commercial vendors is the patent clause.

Still, some companies try hard to avoid GPLv3: see Apple, which either provides old GPLv2-licensed software or invests in BSD/MIT replacements.

You might know this history better than me.

GCC has come a long way in terms of features and complexity since the 90's/00's when Stallman made these decisions. Today, building a compiler from scratch would be a huge undertaking, and would be prohibitively expensive for most organizations regardless of licensing.

If the requirement were still just to implement a "simple" C89-compliant compiler, and I were worried about software freedom, the GPL would probably still be a good bet.


I'm not sure that's the only reason. In recent years a lot of projects have chosen to avoid the (l)GPL and use more permissive licences to try and reach a larger audience that might have been spooked by free software.

This gave LLVM a leg up too.


They can do this because they have a choice. Apple cleaned itself of the GPL once it could, after a long stint where it couldn't. Had GCC been the library backend standard instead of LLVM, the world would have a lot more GPL in its compilers.

> They can do this because they have a choice. Apple cleaned itself of the GPL once it could, after a long stint where it couldn't. Had GCC been the library backend standard instead of LLVM, the world would have a lot more GPL in its compilers.

I don't think this is a valid assumption. If the root cause was a refusal to adopt GPL software, I think it's rather obvious that in a universe where LLVM doesn't exist, companies such as Apple would still divert their resources to non-GPL software.

Apple is also not resource-constrained, nor a stranger to developing compilers and programming languages. Does anyone believe that GCC is the only conceivable way of developing a compiler?

There's a lot of specious reasoning involved in this idea that LLVM is the sole reason some companies don't adopt GCC. The truth of the matter is that licenses do matter, and if a license is compatible with their goals then companies can and will contribute (LLVM) whereas if it isn't (GCC) companies will avoid even looking at it.


And yet, LLVM is thriving, and not desolately crying for proprietary commercial improvements to be fed back by their creators. It's an odd balance, sometimes it works out, this seems to be such a case.

Let's be real here.

A lot of this is driven by FAANG or FAANG wannabes, companies at a scale where they can basically reproduce a huge chunk of OSS infrastructure.

They also put out a lot of open source which they don't want to license as GPL due to a general fear of GPL contamination.

Most of this is huge corporation driven.


There is also the case of the more liberally licensed SRC Modula-3 compiler (front end) which worked around the GPL by running GCC as a separate process and feeding it IR files. Less efficient, but effective.

It's always an option if you're willing to put up with the awkwardness and inefficiency. GPL has more in common with DRM than you would think.


In retrospect, I think this came out for the better in the case of LLVM, and probably for GCC too. After all, both compilers emit ~equally optimized code today.

More languages choose LLVM as their primary backend, like Rust, Crystal, Julia.

What about peripheral packages for the GCC library? The compiler specifies Objective-C in GPL for the front-end architecture.

Freedom through obscurity

Stallman is such a deep thinker. I think he doesn't get nearly as much credit as he deserves.

I don't think this shows deep thought on his part.

By Stallman's own telling a free Objective-C frontend was an unexpected outcome. Until it came up in practice he thought a proprietary compiler frontend would be legal (https://gitlab.com/gnu-clisp/clisp/blob/dd313099db351c90431c...). So his stance in this email is a reaction to specific incidents, not careful forethought.

And the harms of permissive licensing for compiler frontends seem pretty underwhelming. After Apple moved to LLVM it largely kept releasing free compiler frontends. (But maybe I'd think differently if I e.g. understood GNAT's licensing better.)


This does not really matter; there is something much worse.

The real apocalypse happened when gcc was made a C++ project. This is probably one of the biggest mistakes in open software ever.

There were rumors which do match the timing: the media labs (and the gcc steering "committee") at MIT being fiddled around with by B. Gates via Epstein (yes, the one you are thinking about). Are those rumors true? Well, there is "something", but "actually what"? All we know is RMS had to "disappear" for a little while... and that was not for health issues, and probably to avoid being splashed by the MIT 'media labs' affair.

Open source is not enough anymore; we (all of us) need _lean_ open source, which de facto excludes ultra-complex-syntax computer languages (C++ and similar), in order to foster real-life alternatives in the SDK space.

gcc is now not much better than closed source software.


I can't comment on the rumors, but at least since I started contributing to GCC a decade ago and I suspect for much longer, RMS certainly is not involved with GCC at all and he does not have any influence on the project in any way.

I agree that moving to C++ was a mistake, and I agree that for software to be truly free it has to be lean, so that users can meaningfully contribute. Apocalypse is certainly exaggerated, as GCC is still something that is fairly accessible to newcomers. But it certainly reflects the general and unfortunate trend of making things too complex.


Yeah, I used the word apocalypse ironically, to highlight the fact that the damage is seriously high. I often push the envelope and call gcc/clang "backdoor generators" (even though nowadays it seems more likely to be hidden in the supply chain/SDK). For me gcc is in /opt, as it's over: this is "open source" not worth more than a binary blob... which generates machine code for EVERYTHING... baw... hurt.

And it seems more and more of us understand how critical _lean_ is for open source software (and file formats/network protocols), _including the SDK, namely the computer languages_. Real-life alternatives or deep customizations are way harder to build without _lean_ software.

A good example: web "app" and web "site". A web "app" is javascript requiring one of the massive WHATWG-cartel web engines. A web "site" is classic or noscript/basic (x)html (aka basic 2D/table-based HTML forms only, with a clean CSS/presentation split).

Also, I should be able to bootstrap, and reasonably so, a full elf/linux-based OS with a simple C compiler (assembler+linker), for instance cproc/scc/tinycc/etc. gcc being now C++ broke everything, and I cannot believe the MIT media lab/"gcc steering committee" did not know that; the rumors of B. Gates interfering at the "MIT medialab" via Epstein funding would explain such an obvious sabotage of open source quality decision making.

All that said, I think it is game over, and we should move toward RISC-V assembly-written programs, sided with high-level languages (python/ruby/shell/javascript-only engine/etc) whose interpreters are written in RISC-V assembly, with a very conservative usage of macro preprocessing (because some preprocessors may reach C++-like complexity, and that would not be much better than the current state). I am currently coding basic rv64 assembly (no pseudo-instructions, only the core ISA, using a basic C preprocessor). I have a little rv64 interpreter to run these programs on x86_64... well, as long as the programs stay simple enough.


I don't see why you think the gcc steering committee is related to MIT and the rest also sounds more like a conspiracy theory.

In any case, there is still a path open for bootstrapping via TCC / GCC 4.8 ...

C++ - in general - is just a symptom of the overall trend in the industry to add overly complex nonsense.


Still waiting for the mythical day when WG14 will manage to standardize safe array and string types in the standard library, as C++ has.

We are not talking about that actually.

Actually we are, because that is one of the reasons why C++ should be used instead of C.

Although ideally both should have been replaced by now.


I would say that C++ pulls in so much entropy that having standardized array and string types does not really make up for it. And the standardized arrays are not even safe in C++: https://godbolt.org/z/Y4f4v8M3z

Because you deliberately ignored the at() method.

https://godbolt.org/z/6PY1ve8ev

Or to use one of the beloved compiler extensions,

https://godbolt.org/z/6KKjcM7bP

Additionally, these extensions will be part of C++ as soon as C++26 gets ratified, given that P3471R4 has been accepted.

Whereas in C that isn't even an option.
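(For anyone following along, a minimal sketch of the distinction being argued; hypothetical code, not taken from the linked godbolt snippets:)

    #include <array>
    #include <cstdio>
    #include <stdexcept>

    int main() {
        std::array<int, 4> a{1, 2, 3, 4};

        // a[10] compiles and performs no bounds check: undefined behaviour,
        // typically reading whatever happens to sit past the array.
        // int oops = a[10];

        // a.at(10) is the checked accessor: out-of-range access throws.
        try {
            std::printf("%d\n", a.at(10));
        } catch (const std::out_of_range &e) {
            std::printf("caught: %s\n", e.what());
        }
    }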


I haven't seen at() used a lot, and simply using C++ instead of C would not make your programs magically use at() instead of []. P3471 is not required to be supported by an implementation, and bounds-checking extensions also exist for C.

Likewise, I have yet to see such extensions for C used in the wild, especially as no one uses them unless their boss tells them to.

However, the big difference is that WG21, at least, is doing something about it.

In C land: radio silence, and worse, clever ideas like VLAs on the stack, naturally also without bounds checking.


Bounds sanitizer with trapping is used in the wild by important projects, e.g. the Linux kernel on various mobile devices.

WG14 is also working on such topics, so this is more your ignorance speaking. But complaining on the internet about the work of volunteers is rather bad style anyway, IMHO.

I am not sure what VLAs on the stack have to do with it, but VLAs certainly enable bounds checking when used. The more important thing is the type system part.
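(Concretely, a minimal sketch of the trapping bounds sanitizer mentioned above; the flags are real GCC options, the file name is made up:)

    // bounds.cpp
    // Build: g++ -fsanitize=bounds -fsanitize-undefined-trap-on-error bounds.cpp
    // The sanitizer instruments indexing into arrays of known size, so the
    // out-of-range access below traps at runtime instead of silently reading
    // past the buffer.
    int main(int argc, char **) {
        int buf[4] = {0, 1, 2, 3};
        return buf[argc + 10]; // index >= 11, always out of bounds: traps
    }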


LLVM has been a C++ project from the start and has 5x more contributions than GCC; we should have moved on from C a long time ago.

Indeed, LLVM was bad right from the start...

Apparently the world of compiler vendors and university researchers doesn't agree.

LLVM is the only open source project that rivals the Linux kernel in the amount of contributions.

https://www.phoronix.com/news/LLVM-Code-Activity-2024


academia does not mean honest and good anymore.

Why should it?


