Why the open social web matters now (werd.io)
219 points by benwerd 1 day ago | hide | past | favorite | 135 comments




Maybe this was more of an intro/pitch to something I already support, so I wasn't quite the audience here.

But I feel that talking about the open social web without addressing the reasons current ones aren't popular/get blocked doesn't lead to much progress. Ultimately, big problems with an open social web include:

- moderation

- spam, which now includes scrapers bringing your site to a crawl

- good faith verification

- posting transparency

These are all hard problems, and they make me believe the future of a proper community lies in charging a small premium. Even charging one dollar for life takes out 99% of spam and puts a price on bad-faith actors: if they're banned, re-entry costs another dollar. That eases moderation needs. But charging money for anything online these days can cause a lot of friction.


In my opinion, both spam and moderation are only really a problem when content is curated (usually algorithmically). I don't need a moderator and don't worry about spam in my RSS reader, for example.

A simple chronological feed of content from feeds I chose to follow is enough. I do have to take on the challenge of finding new content sources, but for me that's a worthwhile tradeoff: I'm not inundated with spam, and I don't feel dependent on someone else to moderate what I see.
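
To make that concrete, here's a minimal sketch of a pull-based chronological reader, assuming the third-party feedparser library and placeholder feed URLs. No ranking, no moderator: the only inputs are the sources the reader chose.

    # Minimal pull-based timeline: fetch feeds I chose, merge, sort by date.
    # Feed URLs are placeholders; feedparser is a third-party library.
    import time
    import feedparser

    FEEDS = ["https://example.com/blog.xml", "https://example.org/notes.rss"]

    def pull_timeline(feed_urls):
        entries = []
        for url in feed_urls:
            parsed = feedparser.parse(url)  # pull: we fetch, nothing is pushed
            for e in parsed.entries:
                published = e.get("published_parsed") or time.gmtime(0)
                entries.append((published, e.get("title", ""), e.get("link", "")))
        return sorted(entries, key=lambda t: t[0], reverse=True)  # newest first

    for published, title, link in pull_timeline(FEEDS)[:20]:
        print(time.strftime("%Y-%m-%d", published), title, link)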


That just means you're effectively acting as a moderator yourself, only with a whitelist. It's just your own direct curation of sources.

And how did you discover those feeds in the first place? Or find new ones?

I know people have tried to build a relatively closed mesh-of-trust, but you still need people to moderate new applicants, otherwise you'll never get any new ideas or fresh discussion. And if it keeps growing, scale means that group will slowly gather bad actors. Maybe directly, by putting up whatever front they need to get into the mesh or into existing in-mesh accounts. Maybe existing accounts get hacked. Maybe previously-'good' account owners have changed, in opinion or in situation, and take advantage of their in-mesh position. It feels like a speedrun of the internet itself growing.


> That just means you're effectively acting as a moderator yourself, only with a whitelist. It's just your own direct curation of sources.

That's exactly how a useful social information system works. I choose what I want to follow and see, and there's no gap between what moderation thinks and what I think. Spam gets dealt with the moment I see something spammy (or just about any kind of thing I don't want to see).

This is how Usenet worked: you subscribed to the groups you found interesting and where participants were of sufficient quality. And you further could block individuals whose posts you didn't want to see.

This is how IRC worked: you joined channels that you deemed worth joining. And you could further ignore individuals that you didn't like.

That is how the whole original internet actually worked: you were reading pages and using services that you felt were worth your time.

Ultimately, that's how human relationships work. You hang out with friends you like and who are worth your time, and you ignore people who you don't want to spend your time with, especially assholes.
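
For what it's worth, that model is tiny in code terms: a whitelist of subscriptions plus a personal killfile. A toy sketch with made-up data:

    # Client-side moderation, Usenet style: subscriptions plus a killfile.
    SUBSCRIBED = {"comp.lang.c", "rec.arts.books"}          # groups I chose
    KILLFILE = {"authors": {"spammer@example.com"},          # people I ignore
                "subjects": {"MAKE MONEY FAST"}}             # phrases I ignore

    def visible(post):
        """post is a dict: {'group': ..., 'author': ..., 'subject': ...}"""
        if post["group"] not in SUBSCRIBED:
            return False
        if post["author"] in KILLFILE["authors"]:
            return False
        return not any(s in post["subject"] for s in KILLFILE["subjects"])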


>This is how Usenet worked: you subscribed to the groups you found interesting and where participants were of sufficient quality. And you further could block individuals whose posts you didn't want to see.

Your explanation actually shows why Usenet doesn't work anymore: that kind of client-side moderation is unusable these days. I was on Usenet in the 1980s, before the World Wide Web arrived in 1993, and continued up until 2008.

Why did I quit Usenet?!? Because it worked better when the internet was much smaller and consisted of universities federating NNTP servers. But Usenet's design can't handle the massive growth of the internet, such as commercial entities being allowed to connect in 1992 and the "Eternal September" of users flooding in from AOL. Spam gets out of control. The signal-to-noise ratio goes way down. Usenet worked better in the "collegial" atmosphere of a smaller internet where it was mostly good actors. Its fundamental design doesn't work for a big internet full of bad actors.

This is why a lot of us ex-Usenet users are here on a web forum that's moderated instead of a hypothetical "nntp://comp.lang.news.ycombinator" with newsgroup readers. With "https://news.ycombinator.com", I don't need to do extra housekeeping of "killfiles" or wade through a bunch of spam.

Whatever next gen social web gets invented, it cannot work like Usenet for it to be usable.

>Spam gets dealt with the moment I see something spammy

Maybe consider you're unusual with that preference because most of us don't want our eyeballs to even see the spam at all. The system's algorithms should filter it out automatically. We don't want to impose extra digital housekeeping work of "dealing with spam" ourselves.


I think most users who have not run these systems themselves really have no clue how bad spam is. It can quickly spiral to the point where 99.9% of the incoming posts on a system are spam, porn where it doesn't belong, or otherwise illegal content. Simply put, even if you as the user filter out 99.5% of the spam, what remains is still majority spam.

IP blocks and initial filtering typically make a massive difference in total system load so you can get to the point that the majority of the posts are 'legitimate'. After that bot filtering is needed to remove the more complex attacks against the system.


You are right. /ignore is all the mod you need.

Incorrect when accounts are free. Usenet providers are forced to police users changing their email addresses or signing up multiple times, or else they get de-peered. IRC networks do IP address bans.

For at least the past 20 years, Usenet has been so full of spam that it’s been made virtually unusable. If de-peering is an option, then why haven’t the providers that allow spammers to operate gotten de-peered?

Most spam was from Google. Google was kicked off Usenet last year or the year before that.

> That just means you're effectively acting as a moderator yourself, only with a whitelist

Agreed, though when you are your own moderator that really is more about informed consent or free will than moderation. Moderation, at least in my opinion, implies a third party.

> And how did you discover those feeds in the first place? Or find new ones?

The same way I make new friends. Recommendations from those I already trust, or "friend of a friend" type situations. I don't need an outside matchmaker to introduce me to people they think I would be friends with.


Well, then the risk is that you build your own bubble of like-minded people. Maybe that's all you need, maybe not.

Anecdotal, but I feel like I've done a better job curating diverse opinions in my feed than any algorithm has.

Surely an algorithm focused on that would best me, but the only ones out there today are motivated solely by selling ad space and data. It's a bit of an unfair fight since that isn't my goal, but I don't expect anyone to fund a social media platform whose algorithm shares my goals.


There is much more diversity of perspective among friends and colleagues than there is in my algorithmic social media feed. This is the whole problem: we no longer see perspectives we respect but don't share.

> you're effectively acting as a moderator yourself

Honestly, that's how things should work. People should simply avoid, block and hide the things they don't like.


If 99% of what I see on the platform is stuff I have to block, if I have to spend half an hour every day blocking stuff, I'm quitting the platform.

That isn't a problem if your feed is filled only with content from those you chose to follow.

If you hit follow and 99% of your feed should be blocked, unfollow them and move on.


ActivityPub allows one to follow hashtags in addition to accounts. Pick some hashtags of interest, find some people in those posts to follow. Lather, rinse, repeat.

ActivityPub has no provision to follow a hashtag - that's a local server feature.

This comment was rate limited.


I think it's the act of creating an access point that allows posting that gets you spam, not necessarily whether it's curated. Your email isn't a curated feed, but it will get tons of spam because people can "post" to it once they get your address. Same with your cell phone number and your physical mailbox.

Since a community requires posting and an access point, spam is pretty much inevitable.


Yeah, I'd agree with that. In addition to being a list of content I subscribed to, an RSS feed benefits from being pull based. Email is push based, and that breaks the self-moderation model.

A simple chronological feed of content is not social media though. That's just reading authors who you like.

I think you are restricting social media by defining it as what it became (at the time, driven by "eyeball" metrics), instead of defining it by what it could or should be.

Well that depends on how we define social media. Facebook started out as a chronological feed, did it only become social media once it began algorithmically curating users' feeds?

I think it became social media when it enabled two-way/multi-way messaging, if that wasn't there from the start. If it was originally just a feed of posts, yeah it wasn't really social media, it was just another form of blogging.

IIRC twitter was originally called a "micro-blogging" platform, and "re-tweeting" and replying to tweets came later. At that point it became social media.


Media outlets are often one-way though. I can't message news networks on TV, and at best their sites may have a comment section enabled. They're still media, and if I can similarly see content from my peers, that seems to check the "social" box, at least in my opinion.

Something like RSS doesn't work for direct messages, but it does still allow for you and I to post to our feeds. Nothing stops it from going a step further and acting much like twitter, we all post to our own site but they can be short messages and they can reference a post on someone else's site as "replying to" or similar.


Blogs often have a place for comments. Twitter was a microblog that elevated comment replies to "first-class tweet status" as a continuation of the microblog idea.

Oh. Do you think I have to read authors I don't like so I can beat them arguing over internets? Ok.

Yeah that’s what social media was 10 years ago. It was better, more like a big sprawling group chat than a stream of engagement bait.

It's not better as demonstrated by the revealed preferences of a vast majority of the users. People DO want algorithmic feeds and NOT chronological feeds. It's a common narrative here that everyone wants chronological feeds, but it's not true just like the claim "Everyone wants small phones". People say one thing and do another.

That's an idiotic argument against chronological feeds. The better argument is that high-frequency posters will bury the once every other month poster.

I tend to get tired of high-frequency posters and unsubscribe or find a way to wall them off in a separate feed.

I think moderation only works when individuals have the agency to choose for themselves what content/posts they see. Mastodon/fediverse sets a good example here - there are "general safety and theme" guards at the instance level, but whether you see "uspol" in your timeline or just posts of cat pics is entirely up to you.

Contrast this with the "media" like Threads, Bluesky, etc. - moderation becomes impossible just because of the sheer scale of it all. Somehow everyone feels compelled to "correct someone who is wrong" or voice an opinion even when the context does not invite one. This is just a recipe for "perpetual engagement", not an actual platform for social interaction (networking).


As someone who worked on a fedi platform, I really appreciate those words.

Some UX decisions even attempt to "passively moderate" content, which unfortunately also deters some people from actively using the platform, since they don't get as much of a feel for the "crowd". For example, not showing the number of "likes" on a post unless you interact with it goes a long way toward preventing mob-like behaviour.

Little stuff like this adds up. But it is hard to sell...


I suspect that moderation is something that “AI” may eventually be quite good at.

But human or machine, open or closed, moderation will always be biased. Each community will have its culture, and moderation will reflect that.

A community of Nazis will moderate out stuff that many places would welcome. Some communities would moderate out anything that's not cats on Roombas.

HN is one of the best-moderated communities I’ve ever seen, yet, it has its biases. Organic moderation reflects cultural biases. I’m not always happy about what gets put on the chopping block, but I usually can’t argue with it, even if I don’t like it, or am cynical about why it’s nuked. I stick with HN, because, for the most part, I don’t mind what gets moderated. The showdead thing lets me see what gets nuked, but I usually like to leave it off. I’m not really one for gazing into the toilet.

The main thing an open fediverse can bring is transparent moderation, so folks will know when stuff is being blocked and can use that knowledge to decide whether to remain in the community or advocate for change.


> Contrast this with the "media" like Threads, Bluesky, etc. - moderation becomes impossible just because of the sheer scale of it all.

Wut? Moderation at Bluesky is fantastic: users build their block lists and share them for others to subscribe to - moderation à la carte... Power to the users!


More like hermetic narrative security at scale.

That would change for the better if BlueSky itself only managed legal prohibitions and let Everything Else be an optional layer.

While the hermetic narrative security would still be there to split people, it would only split by optional layers.


I had two accounts banned from BlueSky. One was parodying Donald Trump, so fair enough if they don't want content like that; they told me it was banned for impersonating Donald Trump. The other, I have no idea about at all, because I don't think I even posted anything very controversial, and the email was just a very generic "you violated terms of service". My third account was not banned, but I don't use BlueSky any more. It wasn't a ban-evasion ban, since the accounts were logged in together in the same web browser, with the account-switching menu active, and yet my third account was not banned.

My point of sharing this info is that BlueSky is not a user-driven moderation system. It arbitrarily and centrally bans accounts, just like Twitter.


You're right, Bluesky moderation is centralized. Unless content is served p2p, some moderation has to be centralized. At the end of the day, there's a server serving content and that server operator is legally obligated to remove illegal material.

Hopefully, atproto + community will provide alternatives for moderation services. Work is being done on this, we'll see what we end up getting. I feel that a competitive ecosystem of moderation services is probably the best answer we can hope for to that inherently messy problem.


Having worked on the problem for years, decentralized social networking is such a tar pit of privacy and security and social problems that I can't find myself excited by it anymore. We are now clear about what the problems with mainstream social networking at scale are, and decentralization only seems to make them worse and more intractable.

I've also come to the conclusion that a tightly designed subscription service is the way to go. Cheap really can be better than "free" if done right.


If I have to pay you to access a service, and I'm not doing so through one of a small number of anonymity-preserving cryptocurrencies such as Bitcoin or Monero, then the legitimate financial system has an ultimate veto on what I can say online.

It does if you don't pay to access the service as well, because the financial system is the underpinning of their ad network.

Even in a federated system, you can be blacklisted although it does take more coordination and work.

i2p and writing to the blockchain are an attempt to deal with that through permanence, but those are not without their own (serious) problems.


I think you misunderstood the OP. Visa controls your free speech with regulatory pressure.

No I understood that.

Payment processors underpin ad systems and they have strong leverage to pressure ad buyers and can pull your ability to make those sales. That's on top of the advertisers themselves having strong positions on what kind of content they want to advertise beside.

Everyone has to pay for servers somehow. Especially at scale. And doing that without payment processors is difficult. Crypto has not proven itself to be something consumers will use.

In all reality, the solution to as much free speech as possible on a social platform is to limit reach. If people want to broadcast to millions or even billions, then of course that will come with limitations and restrictions. Everyone has to balance the varied interests required to achieve scale. Limiting individual reach means more potential freedom for users.


It's unfortunate, and I don't necessarily want to say decentralization isn't viable at all. But at best I see decentralization addressing the issue of scraping. It solves different problems without necessarily addressing the core ones needed to make a new community functional. But I think both kinds of tech can execute on addressing these issues.

I'm not against subscriptions per se, but I do think a one time entry cost is really all that's needed to achieve many of the desired effects. I'm probably in the minority as someone who'd rather pay $10 one time to enter a community once than $1-2/month to maintain my participation, though. I'm just personally tired of feeling like I'm paying a tax to construct something that may one day be good, rather than buying into a decently polished product upfront.


For the record, people working on decentralization should not stop working on it. For myself, I have moved on to other approaches with different goals, but it's a worthwhile endeavor and if anyone ever cracks it, it'll change the damn world. And the people working on it understand exactly how difficult it is, so nothing I say is news to them. But everyone should be clear-eyed about it. It's not a panacea, it's complicated on much more than a technical level and it's already incredibly complicated on a technical level.

And even if it works, there will still be carry-over of many of the problems we've seen with centralized social networks.


How do you decentralize a network that relies on dictionary semantics, the chaos of arbitrary imagery, and the basics of grammatically sequenced signals?

It's oxymoronic. Our communication was developed inside highly developed hierarchies for a reason: continual deception, deviance, anarchism, perversion, and subversion always operate in conflict with, and contrary to, hierarchies.

Language is not self-organizing; signaling is not self-learning or self-regulating. The web opened the already-existing pandora's box of Shannon's admittedly non-psychologically-relevant information theory and went bust at scale.


<<There's glory for you!>>

Yeah, kind of agree. Decentralised protocols are forced to expose a lot of data which could normally be kept private, like users' own likes.

Dunno necessarily if they are _forced_ to expose that data.

Something like OAuth means that you can give different levels of private data to different actors, based on what perms they request.

Then you just have whoever is holding your data anyway (it's gotta live somewhere) also handle the OAuth keys. That's how the Bluesky PDS system works, basically.

Now, there is an issue with blanket requesting/granting of perms (which an end user isn't necessarily going to know about), but IMO all that's missing from the Bluesky-style system is a way to reject individual OAuth grants (for example, making it so Bluesky can't read my likes but can still write to my likes).
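
As a sketch of what rejecting individual grants could look like (scope and client names invented, not real atproto scopes):

    # Hypothetical per-client scope grants; the user rejected "likes:read"
    # for the main app but kept "likes:write". Names are illustrative only.
    GRANTS = {
        "main-app.example": {"likes:write", "posts:write"},
        "analytics.example": set(),  # this client gets nothing
    }

    def authorized(client_id, scope):
        return scope in GRANTS.get(client_id, set())

    assert authorized("main-app.example", "likes:write")
    assert not authorized("main-app.example", "likes:read")  # rejected grant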


In a federated system, the best you can do is a soft delete request, and ignoring that request is easier than satisfying it.

If I have 100 followers on 100 different nodes, that means each node has access to (and holds on to) some portion of my data by way of those followers.

In a centralized system, a user having total control over their data (and the ability to delete it) is more feasible. I'm not saying modern systems are great about this (GDPR was necessary to force their hands), but federation makes it more technically difficult.
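
For reference, in ActivityPub terms the soft delete is a broadcast Delete activity; the origin server has no way to verify that remote nodes comply. A sketch with placeholder URLs:

    # An ActivityPub-style Delete activity, as the origin server would
    # broadcast it to follower nodes. URLs are placeholders.
    delete_activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Delete",
        "actor": "https://example.social/users/alice",
        "object": "https://example.social/users/alice/posts/123",
    }
    # Each remote node independently decides whether to drop its copy;
    # there is no acknowledgement protocol, so deletion can't be confirmed.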


The ability to fully delete your posts on any platform is an illusion anyway, as e.g. a local politician found out.

I don't really see my posts remaining in people's RSS readers after deletion as a problem. It's a fundamental property of information distribution as far as I am concerned.


That's not entirely true. The big platforms yes, but that's a combination of economic incentives and technical challenges at scale (such as moving data to cold storage).

But even then, that means there's resistance, but that's not the same as things being technically impossible. In federated systems, a delete is not a delete. It can't be because there's no way to confirm deletion on nodes you can't control.

And I understand your perspective as a realist on deletion generally, but that's not most social media users' understanding when they're told they can control their own data, which is a common selling point of federation.

A centralized system that is properly incentivized to completely wipe all data associated with an account will be able to do so, but a federated system can't.


[flagged]


I'm a consultant that builds for startups. I'm not an entrepreneur myself.

If I were to build something like this, I'd use a services non-profit model.

Ad-supported apps result in way too many perverse economic incentives in social media, as we've seen time and time again.

I worked on open source decentralized social networking for 12 years, starting before Facebook even launched. Decentralization, specifically political decentralization which is what federation is, makes the problems of moderation, third order social effects, privacy and spam exceedingly more difficult.


>Decentralization, specifically political decentralization which is what federation is, makes the problems of moderation, third order social effects, privacy and spam exceedingly more difficult.

I disagree that federation is "specifically political decentralization" but how so?

You claim that decentralization makes all of the problems of mainstream social networking worse and more intractable, but I think most of those problems come from the centralized nature of mainstream social media.

There is only one Facebook, and only one Twitter, and if you don't like the way Zuckerberg and Musk run things, too bad. If you don't like the way moderation works with an instance, you don't have to federate with it, you can create your own instance and moderate however you see fit.

This seems like a better solution than everyone being subject to the whims of a centralized service.


To clarify, I don't mean big-P Politics; I mean political in the sense that each node is owned and operated separately, which means there are competing interests and a need to coordinate between them that extends beyond the technical. Extrapolated to N potential nodes, that creates a lot of conflicting incentives and perspectives that have to be managed. And if the network ever becomes concentrated in a handful of nodes or even one of them, which is not unlikely, then we're effectively back at square one.

> if you don't like the way Zuckerberg and Musk run things, too bad

It's important to note we're optimizing for different things. When I say third-order social effects, it means the way that engagement algorithms and virality combine with massive scale to create a broadly negative effect on society. This comes in the form of addiction, how constant upward social comparison can lead to depression and burnout, or how in extreme situations, society's worst tendencies can be amplified into terrible results with Myanmar being the worst case scenario.

You assume centralization means total monopolization, which neither Twitter nor Facebook nor Reddit nor anyone else has achieved. You may lose access to a specific audience, but nobody has a right to an audience. You can always put up a website, blog, write op-eds for your local newspaper, hold a sign in a public square, etc. The mere existence of a centralized system with moderation is not a threat to freedom of speech.

Federation is a little bit more resilient but accounts can be blacklisted, and whole nodes can be blacklisted because of the behavior of a handful of accounts. And unfortunately, that little bit of resilience amplifies the problem of spam and bots, which for the average user is much bigger of a concern than losing their account. Not to mention privacy concerns, which is self-evident why an open system is more difficult than a closed one.

I'll concede that "worse" was poor wording, but intractable certainly wasn't. These problems become much more difficult to solve in a federated system.

However, most advocates of federation aren't interested in solving the same problems as I am, so that's where the dissonance comes from.


> You assume centralization means total monopolization, which neither Twitter nor Facebook nor Reddit nor anyone else has achieved. You may lose access to a specific audience, but nobody has a right to an audience.

When almost everyone has access to something and you are singled out and denied that access (without due process), then there's a problem, discussions on the definition of monopoly notwithstanding.

You can try to fix that by ensuring the process is fair and transparent, or by changing the market so that there is no single entity whose services almost everyone uses.


Sure but I'd argue there's a much greater chance of a subscription service having a fair and transparent process because you are the customer versus an ad-supported service because the advertisers are the customer.

And maybe users have a right to not be deleted without cause, despite it being a private platform. Maybe scale means that they have to play by different rules.

But what if the answer is reducing reach so only explicit followers can see what's posted? Do users have a right to being algorithmically boosted? Do they have a right to a wide audience? People who have had their reach reduced on instagram or twitter don't seem content to accept that but I don't see an argument against it.

In a federated system, spam and bots are a huge problem. One way this is handled is a shared blocklist. Something I toyed with was a propagated list like DNS to handle this problem, which would go a long way, but would also mean that being blocked by a highly trusted node could mean being blacklisted by the fediverse. This has already happened in a soft way when Gab was mass defederated. As the fediverse grows, automated tooling is necessary. Even if people have a right to contest being blocked, what's the reasonable mechanism for getting unblocked in a massive federated system?
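
A toy version of that DNS-like propagated blocklist, with invented trust weights and a threshold; note how one highly trusted accuser is enough to push a domain over the line:

    # Merge peers' blocklists, weighted by how much this node trusts each peer.
    PEER_TRUST = {"big-instance.example": 0.9, "friend.example": 0.6}
    PEER_BLOCKS = {
        "big-instance.example": {"spam.example"},
        "friend.example": {"spam.example", "edgy.example"},
    }
    BLOCK_THRESHOLD = 0.8

    def effective_blocklist():
        scores = {}
        for peer, blocked in PEER_BLOCKS.items():
            for domain in blocked:
                scores[domain] = scores.get(domain, 0.0) + PEER_TRUST[peer]
        return {d for d, s in scores.items() if s >= BLOCK_THRESHOLD}

    print(effective_blocklist())  # {'spam.example'}; edgy.example stays visible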


I can see it from the point of view of e.g. a politician, where reduction in reach has a direct impact on the number of votes they can expect. Disadvantaging one is as good as giving advantage to the rest, and in the context of politics that would be problematic even if a court ordered it.

That's not to say Fediverse-style moderation would solve this. I don't really know what the solution is for algorithmic feeds. Personally I'd rather go back to lightly-federated or unfederated forums, but that idea seems sadly unpopular.


Right and I'm not a fan of algorithmic feeds at all. Social media users broadly are happiest with a basic chronological feed composed only of who they follow. That's why every social media platform starts with that, then adds algorithmic feeds when they want to attract advertisers and after they feel their users are "locked in" enough.

Especially when engagement is the primary metric, which incentivizes our worst attention-seeking behavior. Well thought out, nuanced posts get lost in the ether. Hot takes, trolling and extreme positions get pushed to the top.

Reddit and HN mitigate this somewhat with the downvote system, which is hardly perfect, but at least means negative feedback is not given a positive weight in rankings.


> Ultimately, big problems with an open social web include:

These two seem like the same problem:

> moderation

> spam

You need some way of distinguishing high quality from low quality posts. But we kind of already have that. Make likes public (what else are they even for?). Then show people posts from the people they follow or that the people they follow liked. Have a dislike button so that if you follow someone but always dislike the things they like, your client learns you don't want to see the things they like.

Now you don't see trash unless you follow people who like trash, and then whose fault is that?
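
A minimal sketch of that rule, with invented data structures: posts score by how many followees liked them, and your dislikes erode each followee's weight.

    # Feed rule: rank posts by likes from people I follow; disliking a post
    # reduces my learned trust in everyone I follow who liked it.
    FOLLOWING = {"alice", "bob"}
    LIKES = {"alice": {"post1", "post3"}, "bob": {"post2", "post3"}}  # public
    TASTE = {"alice": 1.0, "bob": 1.0}  # per-followee weight, learned

    def feed_score(post_id):
        return sum(TASTE[u] for u in FOLLOWING if post_id in LIKES[u])

    def on_dislike(post_id):
        for u in FOLLOWING:
            if post_id in LIKES[u]:
                TASTE[u] = max(0.0, TASTE[u] - 0.2)

    # Anything nobody I follow liked scores 0 and never surfaces, so anon
    # spam is invisible without a central moderator.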

> which now includes scrapers bringing your site to a crawl

This is a completely independent problem from spam. It's also something decentralized networks are actually good at: if more devices are requesting some data, then there are more sources of it. Let the bots get the data from each other. Track share ratios so that high-traffic nodes with bad ratios get banned for leeching; then it's cheaper for them to get a cloud node somewhere with cheap bandwidth and actually upload than to buy residential proxies to fight bans.
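
That ratio check could be as simple as this sketch (thresholds arbitrary, names invented):

    # Track per-peer transfer totals; a high-traffic peer with a bad
    # upload/download ratio is leeching and gets cut off.
    class PeerStats:
        def __init__(self):
            self.uploaded = 0
            self.downloaded = 0

        def ratio(self):
            return self.uploaded / max(1, self.downloaded)

    def should_ban(stats, min_ratio=0.2, grace_bytes=10**9):
        # only judge peers past the grace allowance, then demand a ratio
        return stats.downloaded > grace_bytes and stats.ratio() < min_ratio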

> good faith verification

> posting transparency

It's not clear what these are but they sound like kind of the same thing again and in particular they sound like elements in the authoritarian censorship toolbox which you don't actually need or want once you start showing people the posts they actually want to see instead of a bunch of spam from anons that nobody they follow likes.


> [...] show people posts from the people they follow or that the people they follow liked.

Yes, this is a good system. It'll work particularly well at filtering spam because people largely agree what it is. One thing that will happen with your system is people will separate into cliques. But that's not the end of the world. Has anyone implemented Anthony's idea of using followees' likes to rank posts?


> Then show people posts from the people they follow or that the people they follow liked

Yeah, that's how you create echo-chambers where people truly believe vaccines cause autism, immigrants commit the most crimes, etc.

When these people inevitably show up at the ballot box, other people will call for more moderation. So no, this doesn't solve "moderation."


But is being exposed to broader viewpoints or even being shown "correct" posts more often something that users of social media actually want?

Social media is often seen as a public opinion manipulation machine, but I hope building another one of those is not the goal.


>You need some way of distinguishing high quality from low quality posts.

Yes. But I see curation more as a second-order problem to solve once the basics are taken care of. Moderation focuses on addressing the low quality, while curation makes sure the high-quality posts receive focus.

The tools needed for curation (filtering, finding similar posts/comments, popularity, following) are different from those needed to moderate or self-moderate (ignoring, downvoting, reporting). The latter set of problems poisons a site before it can really start to curate for its users.

>This is a completely independent problem from spam.

Yeah, thinking more about it, it probably is a distinct category. It simply has a similar result of making a site unable to function.

>It's not clear what these are but they sound like kind of the same thing again

I can clarify. In short, posting transparency focuses more on the user, and good faith verification focuses more on the content. (I'm also horrible with naming, so I welcome better terms to describe these.)

- Posting transparency at this point has one big goal: ensure you know when a human or a bot is posting. But it extends to ensuring there's no impersonation, that there's no abuse of alt accounts, and no voting manipulation.

It can even extend, in some domains, to making sure that e.g. a person who says they worked at Google actually worked at Google. But this is definitely a step that can overstep privacy.

- Good faith verification refers more to a duty to properly vet and fact-check information that is posted. It may include addressing misinformation and hate, or removing non-transparent intimate advice like legal/medical claims made without sources or proper licensing. It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.

>they sound like elements in the authoritarian censorship toolbox which you don't actually need or want once you start showing people the posts they actually want to see

Yes, they are. I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep such systems up are equally dangerous in an ad-driven platform. Being able to address that naturally requires some more authoritarian approaches.

That's why "good faith" is an important factor here. Any authoritarian act you introduce can only work on trust, and is easily broken by abuse. If we want incentives to change from "maximizing engagement" to "maximizing quality and community", we need to cull out malicious information.

We already give some authoritarianism by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can also do this.


> Moderation focuses on addressing the low quality, while curation makes sure the high-quality posts receive focus.

This is effectively the same problem. The feed has a billion posts in it so if you're choosing from even the top half in terms of quality, the bottom decile is nowhere to be seen.

> The latter poisons a site before it can really start to curate to its users.

That's assuming you start off with a fire hose. Suppose you only see someone's posts in your feed if you a) visit their profile or b) someone you follow posted or liked it.

> ensure you know when a human or a bot is posting.

This is not possible and you should not attempt to do things that are known not to be possible.

It doesn't matter what kind of verification you do. Humans can verify an account and then hand it to a bot to post things. Also, alts are good; people should be able to have an account for posting about computers and a different account for posting about cooking or travel or politics.

What you're looking for is a way to rate limit account creation. But on day one you don't need that because your biggest problem is getting more users and by the time it's a problem you have a network effect and can just make them pay a pittance worth of cryptocurrency as a one-time fee if it's still a thing you want to do.

> It can even extend, in some domains, to making sure that e.g. a person who says they worked at Google actually worked at Google.

This is not a problem that social networks need to solve, but if it was you would just do it the way anybody else does it. If the user wants to know if someone really works for Google they contact the company and ask them, and if the company says no then you tell everybody that and anyone who doesn't believe you can contact the company themselves.

> It may include addressing misinformation and hate, or removing non-transparent intimate advice like legal/medical claims without sources or proper licensing.

If someone does something illegal then you have the government arrest them. If it isn't illegal then it isn't to be censored. There is nothing for a social media thing to be involved in here and the previous attempts to do it were in error.

> It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.

To the extent that social media does such a thing, it does it exactly as above, i.e. as Reddit communities investigate things. If you want a professional organization dedicated to such things as an occupation, the thing you're trying to do is called investigative reporting, not social media.

> I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep them up are equally dangerous in an ad-driven platform.

No, they're much worse in an ad-driven platform, because then you're trying to maximize the amount of time people spend on the site and showing people rage bait and provocative trolling is an effective way to do that.

What people want to see is like, a feed of fresh coupon codes that actually work, or good recipes for making your own food, or the result of the DIY project their buddy just finished. But showing you that doesn't make corporations the most money, so instead they show you somebody saying something political and provocative about vaccines because it gets people stirred up. Which is not actually what people want to see, which is why they're always complaining about it.

> We already give some authoritarianism by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can also do this.

We should take away their ability to actually remove anything (censorship can GTFO) and instead give people a feed that they actually control and can therefore configure to not show that stuff, because it is in reality not what they want to see.


>This is effectively the same problem.

Maybe when you get to the scale of Reddit it becomes the same problem. But a fledgling community is more likely to be dealing with dozens of real posts and hundreds of posts of spam. Even then, the solutions differ from the problem spaces, so I'm not so certain.

You can't easily automate a search for "quality", so most popular platforms focus on a mix of engagement and similarity to create a faux quality rating. Spam filtering and removal can be fairly automatic and accurate, as long as there are ways to appeal false positives (though these days, they may not even care about that).

>This ["ensure you know when a human or a bot is posting"] is not possible and you should not attempt to do things that are known not to be possible.

Like all engineering, I'm not expecting perfection. I'm expecting a good effort at it. Is there anything stopping me from hooking an LLM up to my HN account and having it reply to all my comments? No. But I'm sure if I took a naive approach to it, moderation would take note and take action on this account.

My proposal is twofold (see the sketch after this list):

1. Have dedicated account types for authorized bots, to identify tools and other supportive functions that a community may want performed. They can even have different privileges, like being unable to be voted on (or to vote).

2. Action taken on very blatant attempts to bot a human account (the threshold being even more blatant than my example above). If account creation isn't free nor easy, a simple suspension or ban can be all that's needed to curb such behavior.

There will still be abuse, but the kinds of abuse that have caused major controversies over the years are not exactly subtle masterminds. There was simply no incentive to take action once people reported them.
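
A toy shape for proposal (1), with invented names:

    # Hypothetical account kinds: authorized bots are first-class but carry
    # different privileges (e.g. they neither vote nor get voted on).
    from dataclasses import dataclass

    @dataclass
    class Account:
        handle: str
        kind: str  # "human" or "bot"

        def can_vote(self):
            return self.kind == "human"

        def can_be_voted_on(self):
            return self.kind == "human"

    helper = Account("digest-bot", "bot")
    assert not helper.can_vote()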

>This is not a problem that social networks need to solve, but if it was you would just do it the way anybody else does

Probably not. That kind of verification is more domain-specific, and that's an extreme example. Something trying to be Blind, focused on industry professionals, might want to do verification, but probably not some casual tech forum.

It was ultimately an example of what transparency suggests here and how it differs from verification. This is another "good enough" example where I'm not expecting every post to be fact-checked. We simply shouldn't allow blatantly false users or content to go about unmoderated.

>What people want to see is like, a feed of fresh coupon codes that actually work, or good recipes for making your own food, or the result of the DIY project their buddy just finished. But showing you that doesn't make corporations the most money, so instead they show you somebody saying something political and provocative about vaccines because it gets people stirred up. Which is not actually what people want to see, which is why they're always complaining about it.

Yes. This is why I don't expect such a solution to be solved by corporations. Outside of the brief flirting with Meta, it's not like any of the biggest players in the game have shown much interest in any of the topics talked about here nor in the article.

But the tools and people needed for such an initiative don't require millions in startup funding. I'm not even certain such a community can be scalable, financially speaking. But communities aren't necessarily formed, run, and maintained for purely financial reasons. Sometimes you just want to open your own bar and enjoy the people who come in, caring only about enough funds to keep the business running, not attempting to franchise it across the country.

>We should take away their ability to actually remove anything (censorship can GTFO) and instead give people a feed that they actually control and can therefore configure to not show that stuff, because it is in reality not what they want to see.

If you want a platform that doesn't remove anything except the outright illegal, I don't think we can really beat 4chan. Nor is anyone trying to beat 4chan (maybe Voat still is, but I haven't looked there in years). I think it has that sector of community on lock.

But that aside: any modern community needs to be very opinionated upfront about what it allows and what it doesn't, in my eyes. Do you want to allow adult content and accept that over half your community's content will be porn? Do you want to take a hard line between adult and non-adult sub-communities? Do you want to minimize flame wars, or not tend to comments at all (beyond those breaking the site)? Should sub-communities even be a thing, or should all topics of all styles be thrown into a central feed where users opt in/out of certain tags? Is it fine for comments to mix in non-sequiturs in certain topics (e.g. politics in an otherwise non-political post)? These all need to be addressed, not necessarily on day one, but well before critical mass is achieved. See OnlyFans as a modern example of that result.

It's not about capital-C "Censorship" when it comes to being opinionated. It's about establishing norms upfront and fostering around those opinions. Those opinions should be shown upfront before a user makes an account so that they know what to expect, or if they shouldn't bother with this community.


A lot of tech folks hate government ID schemes, but I think mDL (the mobile driver's license standard) with some sort of pairwise pseudonyms could help with spam and verification.

It would let you identify users uniquely, but without revealing too much sensitive information. It would let you verify things like "This user has a Michigan driver's license, and they have an ID 1234, which is unique to my system and not linkable to any other place they use that ID."

If you ban that user, they wouldn't be able to use that ID again with you.

The alternative is that we continue to let unelected private operators like Cloudflare "solve" this problem.
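
One plausible construction for the pairwise part (only a sketch; in a real mDL/eIDAS wallet the derivation would live in secure hardware): derive each site's pseudonym from an ID-bound secret with a keyed hash.

    # Derive a per-site pseudonym from an ID-bound secret. Sites can't
    # correlate the IDs, but a ban sticks because the ID is stable per site.
    import hashlib
    import hmac

    def pairwise_id(holder_secret: bytes, relying_party: str) -> str:
        mac = hmac.new(holder_secret, relying_party.encode(), hashlib.sha256)
        return mac.hexdigest()[:16]

    secret = b"bound-to-one-physical-license"    # placeholder secret
    print(pairwise_id(secret, "forum.example"))  # stable ID for this site...
    print(pairwise_id(secret, "shop.example"))   # ...unlinkable elsewhere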


Telegram added a feature where if someone cold dms you, it shows their phone number country and account age. When I see a 2 month old account with a Nigeria phone number I know it's a bot and I can ignore it.

The EU’s eIDAS 2.0 specification for their digital wallet identity explicitly supports the use of pseudonyms for this exact purpose of “Anonymous authentication”.

I tend to agree that linking content to real people is a key mechanism that will help fight spam, hidden ads and influencing campaigns.

However, if everything is linked to your ID, those who control that link will be able to cancel you all at once. A bit like the consequences of being de-banked in a cashless society.

So if you bring in laws that require digital IDs, they have to be implemented with a very robust set of safeguards, in my view.


That's awesome. Hopefully the US can get something similar.

> - spam, which now includes scrapers bringing your site to a crawl

What do you mean by "now"? If you've ever been in a competitive industry, you're already used to the random DDoS, and if you've published a moderately successful website, you've dealt with misbehaving scrapers/user-agents too, like the ones that get stuck and keep requesting 100 random URLs per second for weeks.

I'm guessing you're alluding to AI scrapers, but are they really that different from what we've already learnt to deal with on the public internet?


Those are important reasons, but there are other reasons as well, such as concentration of market power in a few companies, which allows those companies to erect barriers to entry and shape law in ways that benefit themselves, as well as simply creating network effects that make it hard for new social-web projects to establish a foothold.

That's an even harder problem to solve. I do agree we should make sure that policy isn't manipulated by vested powers into making competition even harder.

But network effects seems to be a natural phenomenon of people wanting to establish a familiar routine. I look at Steam as an example here, where while it has its own shady schemes behind the scenes (which I hope are addressed), it otherwise doesn't engage in the same dark patterns as other monopolies. But it still creates a strong network effect nonetheless.

I think the main solace here is that you don't need to be dominant to create a good community. You need to focus instead on getting above a certain critical mass, where you keep a healthy stream of posting and participation that can sustain itself. Social media should ultimately be about establishing a space for a community to flourish, and small communities are just as valid.


> I look at Steam as an example here, where while it has its own shady schemes behind the scenes (which I hope are addressed), it otherwise doesn't engage in the same dark patterns as other monopolies.

For now. Google's motto was "don't be evil" until it wasn't.

> I think the main solace here is that you don't need to be dominant to create a good community.

I'm not so sure. I mean, yes, you can create a "good" community on a small scale. But when the system is geared towards larger entities, those communities are always at risk because they don't have a seat at the table, so to speak.

An example that's relevant recently is these laws about age verification. For small communities, things like age verification requirements can be an existential threat. One crazy person with a vendetta can sue them into oblivion for some technical violation, and unlike the big players, they don't have the resources to fight back. They also don't have the resources to lobby for verification mechanisms that are realistic at a small scale.

When the rules of the game are set by the big players, small players are always at risk of being declared in violation, or just having the rules changed in a way that ensures they can't win. I'm coming to think that small communities are not safe unless large networks are either destroyed or severely restricted in a way that involves ongoing monitoring and enforcement. (I hesitate to use the word "community" for these large networks because I don't think something like Facebook is really a community, although there are communities within it.)


It is interesting how it became a norm to just blindly assume the more decentralized something is the better it is. There isn’t any evidence this is true. Reality isn’t so reducible.

I haven’t really seen anyone make that assumption. I don’t think the article blindly assumes anything, they provided some pretty concrete examples of why a decentralized platform may solve some of the issues with centralized social media.

A small cost to enter just means the capital still controls the narrative. As long as we can't even stop bullying physically in schools, we will not be able to have a civil social media. Start in the kindergarten and fix the problems with next generation or it will just get worse.

"- moderation

- spam, which now includes scrapers bringing your site to a crawl

- good faith verification

- posting transparency"

And we have to think about how to hit these targets while:

- respecting individual sovereignty

- respecting privacy

- meeting any other obligations or responsibilities within reason

and of course, it must be EASY and dead simple to use.

It's doable, we've done far more impossible-seeming things just in the last 30 years, so it's just a matter of willpower now.


Why are none of these a problem with Mastodon then? Some instances do charge, but most don't.

#1 problem is server hosting

Charging money, I suspect, feels like more of a solution to people who would otherwise prefer something be free than an actual solution.

The one principal benefit it has is that it provides a resource stream to fund other things - but that's kind of it, because the other assumptions essentially presume there are no monetary thresholds in the system, i.e. that spammers and botnets rely on volume and thus can be priced out of the market.

And to an extent it's true - but there's a threshold problem. How many accounts do you need to pay for in order to seize narrative control of a space, or to effect a takeover by seizing positions of power like moderation positions?

People like to think "hundreds or thousands" - but at least temporarily even that is well within reach of motivated threat actors (consider bot farms, which are literally racks of stripped-down actual smartphones being puppeted - not a trivial investment). The other side of it is that the real number is probably closer to 10 relatively active personalities.


It'd be cool if you had to pay a certain amount of money to publish any message.

And then if you could verify you'd paid it in a completely P2P decentralized fashion.

I'm not a crypto fan, but I'd appreciate a message graph where high signal messages "burned" or "donated money" to be flagged for attention.

I'd also like it if my attention were paid for by those wishing to have it, but that's a separate problem.


It's pure waste-generation, but hashcash is a fairly old strategy for this, and it's one of the foundations of Bitcoin. There's no "proof of payment to any beneficial recipient", sadly, but it does throttle high-volume spammers pretty effectively.
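
For reference, a bare-bones hashcash-style stamp looks like this: minting costs about 2^bits hash attempts on average, while verifying costs one.

    # Hashcash-style proof of work: find a nonce whose SHA-256, read as an
    # integer, falls below a target with `bits` leading zero bits.
    import hashlib
    from itertools import count

    def mint(message: str, bits: int = 20) -> int:
        target = 1 << (256 - bits)
        for nonce in count():
            h = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
            if int.from_bytes(h, "big") < target:
                return nonce  # ~2**bits attempts on average

    def verify(message: str, nonce: int, bits: int = 20) -> bool:
        h = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        return int.from_bytes(h, "big") < (1 << (256 - bits))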

Maybe if you could prove you sent a payment to a charity node and then signed your message in the receipt for verification...

Imagine a world where every City Hall has a vending machine you can use to donate a couple bucks to a charity of your choice and receive an anonymous, one-time-use "some real human physically present donated real money to make this" token.

You could then spend the token with a forum, to gain and basic trust for an otherwise anonymous account.


I like that idea a lot.

The people willing to pay money to post messages are not a desirable demographic. It is one that includes people like spammers.

>We always wanted Elgg to federate. That was the plan from day one. It was an obvious need. There was no ActivityPub yet.

Elgg has ActivityPub plugin now https://elgg.org/plugins/3330966


To check out other FediForum keynotes, many demos showing off innovative open social web software, and notes from the FediForum unconference sessions, go to https://fediforum.org (disclaimer: FediForum co-organizer here)

Interesting discussion and article. My work is mostly in the decentralized model of thinking. Many of the arguments assume that some degree of control is necessary at the scale we deal with. The difficulty of true decentralization reminds me of what the founders of our republic wrestled with: avoiding both the over-control of old hierarchies and governing structures (centralized power) and the chaos of anarchy (decentralized power). Their answer was a framework that limited power but preserved individual freedom. Perfection? No. But it is a very interesting experiment to be a part of.

We seem to be facing a similar balance today, only in a digital context and on a global scale. Human nature being what it is, no system will ever be perfect. What we can do is build safeguards (structures that empower users and raise the cost of abuse) so that bad actors face diminishing returns for bad behavior.


The open social web's decentralization is just as dependent on relevant protocols and communities as it is on the hosting services on which they depend.

It's way easier to censor a decentralized social network if the majority of its nodes run on AWS, GCP and Azure, for instance.

What'd be great is if we could run these networks primarily from our personal devices (i.e. true edge computing), but the more the computing's pushed to the edge the harder it becomes to implement technically and socially.


Nostr can do this. Relays are lightweight enough to run on Android devices; Citrine even ships one with a nice UI. It's not p2p or anything, but it works well enough to preserve your own note history, and there are plans to extend its functionality beyond that.

https://github.com/greenart7c3/Citrine


Neat!

For this to be long-term sustainable though, it needs to be implemented in such a way that non-tech-savvy folks can also participate very easily, without needing to learn anything about P2P, relays, decentralized or edge computing, etc.


Spritely is the solution. Been baking for a few years now. Just pushed an update last week, in fact: https://spritely.institute/

I've never really got social media in any of its forms. I use messaging apps to stay in contact with people I like, but that's about it.

I skimmed this article, and I still don't get it. I think group chats cover most of what the author is talking about, public and private ones. But this might be my lack of imagination. I feel the article, and by extension the talk, could have been a lot shorter.


> skimmed this article, I still don't get it.

But you're posting here, in social media, no? So you sought out something here that a group chat wouldn't give.

Most of the article is focused on making sure any social media (be it chats, a public forum, or email) isn't hijacked by vested powers who want to spread propaganda or drown the user in ads. One approach to that, and the article's focus, is decentralization, which gives a user the ability to take their ball and go home.

Of course, it's futile if the user doesn't care about wielding that power.


> But you're posting here, in social media, no? So you sought out something here that a group chat wouldn't give.

This is true, of course. I'm here interacting with strangers. But, for me, HN is about discovery not community like what the article talks about. I'd be just as content not posting if the ability wasn't there. I just don't agree that social media is that important.

I personally think what the article talks about is already available in the form of group chats on platforms like Signal. My impression, from the article, is that the author is extremely politically motivated and seems to believe social media is somehow a good thing, as long as the people they don't like can't control it, and likely can't use it? That last point might not be true.


I think the author is more saying that no political party (whether or not they like them) should be able to control it. I don't see anywhere in the article that would suggest they don't want certain people to use it. Just that they don't want people in positions of political power to be able to spy on users of social media and/or take their data at their will.

Group chats are where real people socialise with their actual friends now. Social media is where people consume infinite slop feeds for entertainment. The days of people posting their weekend on Facebook are long gone.

> The days of people posting their weekend on Facebook are long gone.

All of my friends do this on instagram or snap.


> consume infinite slop feeds for entertainment

I wish this was true. Far too many people mistake the content of social media for some kind of truth.


Group chats are lowercase-s social media, but they still benefit from being open.

By open do you mean not centralised? I don't get the significance of big S social media. Functionally how would big S improve on group chats?

> By open do you mean not centralised? I don't get the significance of big S social media. Functionally how would big S improve on group chats?

Social media has two functions: chat (within groups/topics/...) and discovery (of groups/topics/...). So unless we rely only on IRL discovery, we need a way to do discovery online.


Discovery is probably the main problem social media creates. Almost all of these problems solve themselves when you remove discovery. If someone in your friends group chat is spamming porn you just remove them. There's no need for the platform to intervene here, small groups of people can moderate their own friend groups.

Once you start algorithmically shoving content to people you have to start worrying about spam, trolling, politics, copyright, and all kinds of issues. The best discovery system is friends sharing chat invite links to other friends who are interested.


Yeah, this is pretty much my sentiment. I want to discover interesting stuff, which is the main reason I'm on HN. But big S social media is a cancer on attention as far as I'm concerned; it serves no benefit to society.

The only times on the internet I've felt part of a community was on old web forums.

I used some similar web-type services for discovery in the past, but they became shit fast or shut down. HN is the nearest I can find that surfaces stuff I'm interested in. Social media might have stuff on it, but I'm unwilling to waste my time trying to find it.


ok, but what if my friends have terrible taste?

Go to events more your taste and find new people to invite you to things.

"The 19th reports at the intersection of gender, politics, and policy - a much-needed inclusive newsroom..." This isn't a problem with the distribution technology. This is a problem with the message, and its narrow niche.

The site's marketing is geared towards collecting donations in the US$20,000 and up range. That doesn't scale. They don't have viewer counts big enough to make it on payments in the $10/year range. So that doesn't scale either.

The back-end technology of this thing has zero bearing on those problems.

[1] https://19thnews.org/sponsorship/


I believe that the more populist layer of the www became social media apps. Hosted LLMs (Claude, ChatGPT, etc.) are going to become the popular source of information, and therefore of narrative. What we must remember is that we should retain control of our thoughts, and be aware of how we can share them without financially interested parties claiming rights to their use or abuse. I am trying to solve some of these problems with NoteSub App - https://apps.apple.com/gb/app/notesub/id6742334239 - but have yet to overcome the real issue of how we can stop the middleman keeping the loop closed with himself in between.

I'd like to add that the need for an open phone OS matters now more than ever.

Social media is simply an extension from cybernetics to the principles of cog-sci as a "protocol" network where status and control are the primary forces mediated. This is irrefutable - the web was built as an extension of the cog-sci parameters of information as control.

Social media can't be saved, it can only be revolutionary as a development arena for a new form of language.

"The subject of integration was socialization; the subject of coordination was communication. Both were part of the theme of control...Cybernetics dispensed with the need for biological organisms, it as the parent to cognitive science, where the social is theorized strictly in terms of the exchange of information. Receivers, senses of signs need to be known in terms of channels, capacities, error rates, frequencies and so forth." Haraway Primate Visions.

I don't understand how technologists and coders can be this naive about the ramifications of electronically externalizing signals which start as arbitrary in person, and then clearly spiral out of control once accelerated and cut off from their initial conditions.


This really reads to me like an example of pseudo-profound bullshit, and yet I'm sure you do mean something - could you explain what?

The technology of language is designed to fool the receiver. That's its primary goal. Read any substantial text on language post-Western functional linguistics, like Deacon's The Symbolic Species. In his view, "language is a virus or a parasite".

Once language became a strategy of cybernetic and then cog-sci regimes (which is what all computer science is modeled from), the basic principle of control-deception in language became electronic through its perceptions of socialization, which comp sci totally mistakes for information (see above) and then control, accelerated and now automated. The entire point of computer science operating socialization is completely off the rails, mindblowingly simple-minded, and damaging to us. Algorithms A/B-tested to succeed are, in essence, suicidal to the survival of our species. We're not optimized to horizontalize communication of this type: arbitrary metaphors and symbols. Language wasn't built for speed, horizontalization or decentralization.

Now read the above again. To call Donna Haraway, the great theorist/historian of cyborg studies and of the development of science into cog-sci, "pseudo" in any way reveals that you have never grasped anything deep and resonant about human-computer interaction.


I'm afraid I'm not convinced. In particular, there's an obvious objection to your first claim: if language were primarily designed to fool people, then it would be useless, because other people would ignore it. As for the rest, it still isn't clear what you are saying. For example: "the basic principle of control-deception in language became electronic through its perceptions of socialization". Sorry, whose perceptions? The principle's perceptions? Language's perceptions?

It seems you can't explain your ideas clearly. Maybe they just aren't clear ideas.


You're wasting your time wasting my time if you pretend you can't find the "it" in that sentence, one my 14-year-old freshman son identified in 2 seconds. That means you're either a moron, or you play very stupid games.

That language is primarily deception can be factual even while 99% of its users remain unable to detect that deception; it's not even fully contradictory. What kind of scientist can't hold near-contradictory processes in their working memory to reach correlational theoretic statements? Certainly none that I know.

If you don't know that the animal world of signals heavily discounts arbitrary forms from roles in survival, I don't know what to tell you. Go back to undergrad and start all over again.

The amount of work about language being too indirect to be a valid, stable signal, and thus deceptive, is rather vast, and you pretending it will vanish with that little narrative shuffle ("people will ignore it") means you must be either doubly moronic, have no idea about the human capacity for self-deception in signals and mythological thought, or spend your days playing defensive games in debates you just can't win.

I count over 300 papers discussing the deceptive nature of language, beginning with Aristotle.

"...at some point a direct contact must occur between knowledge and reality. If we succeed in freeing ourselves from all these interpretations – if we above all succeed in removing the veil of words, which conceals the truth, the true essence of things, then at one stroke we shall find ourselves face to face with the original perceptions..." (Ernst Cassirer, The Philosophy of Symbolic Forms)


> What specific pain point are you solving that keeps people on WhatsApp despite the surveillance risk, or on X despite the white supremacy?

Why wouldn't a genuinely open social web allow people to communicate content that Ben Werdmuller thinks constitutes white supremacy, just as one can on X? Ideas and opinions that Ben Werdmuller (and people with similar activist politics to him) think constitute white supremacy are very popular among huge segments of the English-speaking public, and if it's even possible for some moderator with politics like Werdmuller to prevent these messages from being promulgated (as was the case at Twitter until Musk bought it in 2022 and fired all the Trust and Safety people with politics similar to Werdmuller's), then it is not meaningfully open. If this is not possible, then would people with Werdmuller's politics still want to use an open social web, rather than a closed social web that lets moderators attempt to suppress content they deem white supremacist?

> As I was writing this talk, an entire apartment building in Chicago was raided. Adults were separated into trucks based on race, regardless of their citizenship status. Children were zip tied to each other.

> And we are at the foothills of this. Every week, it ratchets up. Every week, there’s something new. Every week, there’s a new restrictive social media policy or a news outlet disappears, removing our ability to accurately learn about what’s happening around us.

The reaction to the raid of that apartment building in Chicago on many social media platforms was the specific meme-phrase "this is what I voted for", and indeed Donald Trump openly ran on doing this, and won the US presidential election. What prevents someone from using open social media tech to call for going harder on deportations, or to spread news stories about violent crimes and fraud committed by immigrants? If anything can prevent this, how can the platform be said to be actually open?


--- We all know about Twitter acquirer Elon Musk, who bent the platform to fit his political worldview. But he’s not alone.

Here’s Microsoft CEO Satya Nadella, owner of LinkedIn, who contributed a million dollars to Trump’s inauguration fund.

Here’s Mark Zuckerberg, who owns Threads, Facebook, Instagram, and WhatsApp, who said that he feels optimistic about the new administration’s agenda.

And here’s Larry Ellison, who will control TikTok in the US, who was a major investor in Trump, and who one advisor called, in a WIRED interview, the shadow President of the United States.

Social media is very quickly becoming aligned with a state that in itself is becoming increasingly authoritarian. ---

This was the real why. When control amasses in the hands of a few, we end up in a place where there is a dissonance between what we perceive to be true and what is actually true. The voice of the dictator will say one thing, but the people's lived experience will say something else. I don't think Mastodon or Bluesky or even Jack Dorsey's new project Bitchat solves any of this. It goes much deeper. It is ideological. It is values driven. The outcome is ultimately decided by the motives of the people who start it or run it. I just don't think any Western-driven values can be the basis of a new platform, because a large majority of the world is not from the West. For better or worse, you have the platforms of the West. They are US-centric and they will dominate. Anything grassroots and fundamentally opposed to that will not come from the West. It must come authentically from those who need it.


80% of the time social media is just increasingly bad slop created to generate clicks/views/engagement. 10% is people screaming into the void, and the last 10% might be valuable content. The Internet was better when we all hung around in big and small web forums and group chats. To this day, the most interesting conversations I see and participate in happen in forums and group chats.

I notice that on Discord, even in "servers" (which aren't servers) that are allegedly about technical topics, it seems like at least 3 out of every 4 messages are slop - low effort irrelevant memes etc. For example there's a cat gif labeled "repost this cat after a substantial delay" and people just post that for no reason, then other people reply with the same gif. And there's no algorithm in Discord - it's an IRC-style chatroom platform - it's real humans posting and engaging with slop because it triggers dopamine or something, somehow.

While I tend to support there being open social alternatives, I haven’t really seen the people behind them talk about the most important aspect: how will you attract and retain users? There has to be more to the value proposition than “it’s open”. The vast majority of users simply do not care about this. They want to be where their friends, family, and favorite content creators are. They want innovation in both content and format. Until the well intentioned people behind these various open web platforms and non-platforms internalize and act on these realities, the whole enterprise is doomed to be a niche movement that will eventually go out with a whimper.

I believe the Corporation for Public Broadcasting should provide funding for local member stations to run their own nodes on fediverse sites, and then federate those nodes together.

> Social media is very quickly becoming aligned with a state that in itself is becoming increasingly authoritarian.

Did this guy complain back when pre-Musk Twitter was fully in bed with the state? Or was he okay with it because that authoritarian relationship was on the right side of history?


1) periodjet guy seems to think "social media" is only Twitter. 2) He also has no understanding of what "authoritarian" means; it's not just a word for dismissing things one doesn't like.

cue links to the nothing burger that is "Twitter Files" lol


Whatever happened to Diaspora?

The name killed it. If you know what it means, it doesn't bear any relevance to social media. If you don't know what it means, it sounds like a gastric disorder.

So does Mastodon

Why was this chosen to be a keynote? This talk seems less concerned with open social media than with the fact that existing social media sites don't follow the author's political agenda. A keynote that tries to rally people into building sites supporting a niche political agenda the general public doesn't agree with doesn't accomplish the goal of making open social media more viable. This, along with equating things with "Nazis", just further alienates people.

I read this comment, went back to the article, and then came back to this comment. I have no idea what niche political agenda you're talking about- the message of the article is basically "solve problems your users are actually facing, not problems you think they have".

You can apply the concepts the author talks about to _literally_ any group that would make use of social media.


>solve problems your users are actually facing, not problems you think they have

>You can apply the concepts the author talks about to _literally_ any group

The presentation could have avoided alienating people if the author had focused on championing how open social media allows for the ability to solve these problems.

>I have no idea what niche political agenda you're talking about

Search the page for "Why should anyone care?", and you'll see it. In that section of the talk, he complains that the political situation in America doesn't match his views. Then in the next section, "The capitulation of social media", he complains that other social media sites don't match his politics. Then in the next section, "The decline of journalism", he argues for his political opinion that journalists are a good thing. Then in the next section, "The problem is global", he explains that places beyond America also don't share his political views.

I'll stop here, but it goes on, even to the very last sentence. I thought this was supposed to be a technology keynote, but the speaker turned it into a venue for complaining about the political situation of the world.


Being against Nazis is not a niche political agenda.

The definition of “nazi” will make it niche

Social media relies on our dead, arbitrary signaling system, language, which, once it's accelerated, becomes a cybernetic/cog-sci control network, no matter how it's operated. Language is about control, status and bias before it's an attempt to communicate information. It's doomed as an external system in arbitrary symbols.


