
>You need some way of distinguishing high quality from low quality posts.

Yes. But I see curation more as a second-order problem to solve once the basics are taken care of. Moderation focuses on addressing the low quality, while curation makes sure the high quality posts receive focus.

The tools needed for curation (stuff like filtering, finding similar posts/comments, popularity, following) are different from those needed to moderate or self-moderate (ignore, downvoting, reporting). Left unchecked, the low-quality flood poisons a site before it can really start to curate for its users.
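To make the split concrete, here's a rough sketch (Python, every name made up for illustration) of how different the two toolsets look once you write them down:

    # Rough sketch of the two toolsets as separate interfaces.
    # Every name here is made up for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Post:
        post_id: int
        author: str
        tags: set[str] = field(default_factory=set)

    class CurationTools:
        """Surface the good stuff: filtering, similarity, popularity."""

        def filter_by_tag(self, posts: list[Post], tag: str) -> list[Post]:
            return [p for p in posts if tag in p.tags]

        def similar_posts(self, post: Post, posts: list[Post]) -> list[Post]:
            # Naive similarity by shared tags; a real system would use search/embeddings.
            return [p for p in posts if p.post_id != post.post_id and p.tags & post.tags]

    class ModerationTools:
        """Handle the bad stuff: ignore, report."""

        def __init__(self) -> None:
            self.ignored: dict[str, set[str]] = {}         # user -> authors they ignore
            self.reports: list[tuple[str, int, str]] = []  # (reporter, post_id, reason)

        def ignore(self, user: str, author: str) -> None:
            self.ignored.setdefault(user, set()).add(author)

        def report(self, user: str, post: Post, reason: str) -> None:
            self.reports.append((user, post.post_id, reason))

Nothing fancy, but notice the curation side never removes anything, and the moderation side never ranks anything.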

>This is a completely independent problem from spam.

Yeah, thinking more about it, it probably is a distinct category. It simply has a similar result of making a site unable to function.

>It's not clear what these are but they sound like kind of the same thing again

I can clarify. In short, posting transparency focuses more on the user, and good faith verification focuses more on the content. (I'm also horrible with naming, so I welcome better terms to describe these.)

- Posting transparency at this point has one big goal: ensure you know when a human or a bot is posting. But it extends to ensuring there's no impersonation, no abuse of alt accounts, and no vote manipulation.

It can even extend in some domains to making sure, e.g., that a person who says they worked at Google actually worked at Google. But this is definitely a step that can overstep privacy boundaries.

- Good faith verification refers more to a duty to properly vet and fact check information that is posted. It may include addressing misinformation and hate, or removing non-transparent intimate advice like legal/medical claims without sources or proper licensing. It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.

>they sound like elements in the authoritarian censorship toolbox which you don't actually need or want once you start showing people the posts they actually want to see

Yes, they are. I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep them up are equally dangerous in an ad-driven platform. Being able to address that naturally requires some more authoritarian approaches.

That's why "good faith" is an important factor here. Any authoritarian act you introduce can only work on trust, and is easily broken by abuse. If we want incentives to change from "maximizing engagement" to "maximizing quality and community", we need to cull out malicious information.

We already grant some authoritarian power by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can also do this.





> Moderation focuses on addressing the low quality, while curation makes sure the high quality posts receive focus.

This is effectively the same problem. The feed has a billion posts in it so if you're choosing from even the top half in terms of quality, the bottom decile is nowhere to be seen.

> Left unchecked, the low-quality flood poisons a site before it can really start to curate for its users.

That's assuming you start off with a fire hose. Suppose you only see someone's posts in your feed if you a) visit their profile or b) someone you follow posted or liked it.
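Something like this, in rough Python (the data model is invented for the sake of the example):

    # Minimal sketch of a follow-driven feed: no firehose, no global ranking.
    # The data model and names are hypothetical.

    def build_feed(user, follows, posts, likes, visited_profiles):
        """
        follows[user]           -> set of accounts `user` follows
        posts                   -> list of (author, post_id), newest first
        likes[post_id]          -> set of accounts that liked that post
        visited_profiles[user]  -> profiles the user chose to visit
        """
        followed = follows.get(user, set())
        visited = visited_profiles.get(user, set())
        feed = []
        for author, post_id in posts:
            posted_by_followed = author in followed
            liked_by_followed = bool(likes.get(post_id, set()) & followed)
            explicitly_visited = author in visited
            if posted_by_followed or liked_by_followed or explicitly_visited:
                feed.append(post_id)
        return feed

The "algorithm" is just your own follow graph, so there's nothing for spam to poison.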

> ensure you know when a human or a bot is posting.

This is not possible and you should not attempt to do things that are known not to be possible.

It doesn't matter what kind of verification you do. Humans can verify an account and then hand it to a bot to post things. Also, alts are good; people should be able to have an account for posting about computers and a different account for posting about cooking or travel or politics.

What you're looking for is a way to rate limit account creation. But on day one you don't need that, because your biggest problem is getting more users, and by the time it is a problem you have a network effect and can just make new accounts pay a pittance worth of cryptocurrency as a one-time fee, if it's still a thing you want to do.
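If you do end up wanting the rate limit, it can be as dumb as a sliding window per signup source. A sketch, where the keying and thresholds are placeholder choices:

    # Toy sliding-window limiter for account creation, keyed by something like
    # source IP or email domain. Thresholds and keying are placeholder choices.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 24 * 60 * 60   # one day
    MAX_SIGNUPS_PER_KEY = 3

    _signups: dict[str, deque] = defaultdict(deque)

    def allow_signup(key: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        recent = _signups[key]
        # Drop signups that fell out of the window.
        while recent and now - recent[0] > WINDOW_SECONDS:
            recent.popleft()
        if len(recent) >= MAX_SIGNUPS_PER_KEY:
            return False
        recent.append(now)
        return True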

> It can even extend in some domains to making sure, e.g., that a person who says they worked at Google actually worked at Google.

This is not a problem that social networks need to solve, but if it was you would just do it the way anybody else does it. If the user wants to know if someone really works for Google they contact the company and ask them, and if the company says no then you tell everybody that and anyone who doesn't believe you can contact the company themselves.

> It may include addressing misinformation and hate, or removing non-transparent intimate advice like legal/medical claims without sources or proper licensing.

If someone does something illegal then you have the government arrest them. If it isn't illegal then it isn't to be censored. There is nothing for a social media thing to be involved in here and the previous attempts to do it were in error.

> It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.

To the extent that social media does such a thing, it does it exactly as above, i.e. as Reddit communities investigate things. If you want a professional organization dedicated to such things as an occupation, the thing you're trying to do is called investigative reporting, not social media.

> I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep them up are equally dangerous in an ad-driven platform.

No, they're much worse in an ad-driven platform, because then you're trying to maximize the amount of time people spend on the site and showing people rage bait and provocative trolling is an effective way to do that.

What people want to see is like, a feed of fresh coupon codes that actually work, or good recipes for making your own food, or the result of the DIY project their buddy just finished. But showing you that doesn't make corporations the most money, so instead they show you somebody saying something political and provocative about vaccines because it gets people stirred up. Which is not actually what people want to see, which is why they're always complaining about it.

> We already grant some authoritarian power by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can also do this.

We should take away their ability to actually remove anything, censorship can GTFO, and instead give people a feed that they actually control and can therefore configure to not show that stuff because it is in reality not what they want to see.
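Concretely, the "removal" lives entirely in the reader's own settings, roughly like this (field names are made up):

    # Sketch: filtering happens per reader, not by deleting content for everyone.
    # Field names are made up.
    from dataclasses import dataclass, field

    @dataclass
    class FeedPreferences:
        muted_authors: set[str] = field(default_factory=set)
        muted_keywords: set[str] = field(default_factory=set)
        hide_tags: set[str] = field(default_factory=set)

    def visible_to(prefs: FeedPreferences, author: str, text: str, tags: set[str]) -> bool:
        if author in prefs.muted_authors:
            return False
        if tags & prefs.hide_tags:
            return False
        lowered = text.lower()
        return not any(word in lowered for word in prefs.muted_keywords)

The post still exists for everyone else; you just chose not to see it.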


>This is effectively the same problem.

Maybe when you get to the scale of Reddit it becomes the same problem. But a fledgling community is more likely to be dealing with dozens of real posts and hundreds of spam posts. Even then, the solutions for the two problem spaces differ, so I'm not so certain.

You can't easily automate a search for "quality", so most popular platforms use a mix of engagement and similarity signals to create a faux quality rating. Spam filtering and removal can be fairly automatic and accurate, as long as there are ways to appeal false positives (though these days, they may not even care about that).
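For a small community, even a dumb rules-based score plus a review queue gets you most of the way. A sketch, where the rules and threshold are invented:

    # Crude heuristic spam filter plus a queue for posts held in error.
    # The rules and the threshold are invented for illustration only.
    import re

    SPAM_THRESHOLD = 3

    def spam_score(text: str, account_age_days: int, link_count: int) -> int:
        score = 0
        if link_count > 2:
            score += 2            # lots of links is a classic spam tell
        if account_age_days < 1:
            score += 1            # brand-new account
        if re.search(r"free money|crypto giveaway|dm me", text, re.IGNORECASE):
            score += 3
        return score

    held_for_review: list[str] = []  # post ids the author can appeal / a mod can review

    def handle_post(post_id: str, text: str, account_age_days: int, link_count: int) -> bool:
        """Return True if the post is published, False if it's held as likely spam."""
        if spam_score(text, account_age_days, link_count) >= SPAM_THRESHOLD:
            held_for_review.append(post_id)
            return False
        return True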

>This ["ensure you know when a human or a bot is posting"] is not possible and you should not attempt to do things that are known not to be possible.

Like all engineering, I'm not expecting perfection. I'm expecting a good effort at it. Is there anything stopping me from hooking an LLM up to my HN account and having it reply to all my comments? No. But I'm sure that if I took a naive approach to it, moderation would take note and take action on this account.

My proposal is twofold:

1. Have dedicated account types for authorized bots, to identify tools and other supportive functions that a community may want performed. They can even have different privileges, like being unable to be voted on (or to vote). (Rough sketch after this list.)

2. Take action on very blatant attempts to bot a human account (the threshold being even more blatant than my above example). If account creation isn't free or easy, a simple suspension or ban can be all that's needed to curb such behavior.
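For point 1, something like this (the names and the exact privilege set are just illustrative):

    # Sketch of dedicated account types with different privileges.
    # Names and the privilege set are illustrative only.
    from dataclasses import dataclass
    from enum import Enum, auto

    class AccountType(Enum):
        HUMAN = auto()
        AUTHORIZED_BOT = auto()   # a declared bot, e.g. a changelog poster or mod tool

    @dataclass
    class Account:
        name: str
        kind: AccountType

    def can_vote(account: Account) -> bool:
        # Authorized bots may post, but don't participate in voting in either direction.
        return account.kind is AccountType.HUMAN

    def can_be_voted_on(account: Account) -> bool:
        return account.kind is AccountType.HUMAN

Point 2 is just ordinary moderation with a deliberately high bar, so it doesn't really need code.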

There will still be abuse, but the abusers behind the major controversies over the years were not exactly subtle masterminds. There was simply no incentive to take action once people reported them.

>This is not a problem that social networks need to solve, but if it was you would just do it the way anybody else does

Probably not. That kind of verification is more domain-specific, and that's an extreme example. Something trying to be Blind, focused on industry professionals, might want to do verification, but probably not some casual tech forum.

It was ultimately an example of what transparency suggests here and how it differs from verification. This is another "good enough" case where I'm not expecting every post to be fact checked. We simply shouldn't allow blatantly fake users or false content to go unmoderated.

>What people want to see is like, a feed of fresh coupon codes that actually work, or good recipes for making your own food, or the result of the DIY project their buddy just finished. But showing you that doesn't make corporations the most money, so instead they show you somebody saying something political and provocative about vaccines because it gets people stirred up. Which is not actually what people want to see, which is why they're always complaining about it.

Yes. This is why I don't expect such a solution to come from corporations. Outside of the brief flirting with Meta, it's not like any of the biggest players in the game have shown much interest in any of the topics talked about here or in the article.

But the tools and people needed for such an initiative don't require millions in startup funding. I'm not even certain such a community can be scalable, financially speaking. But communities aren't necessarily formed, run, and maintained for purely financial reasons. Sometimes you just want to open your own bar and enjoy the people who come in, caring only about enough funds to keep the business running, not about franchising it across the country.

>We should take away their ability to actually remove anything, censorship can GTFO, and instead give people a feed that they actually control and can therefore configure to not show that stuff because it is in reality not what they want to see.

If you want a platform that doesn't remove anything except the outright illegal, I don't think we can really beat 4chan. Nor is anyone trying to beat 4chan (maybe Voat still is, but I haven't looked there in years). I think it has that sector of community on lock.

But that aside: any modern community needs to be very opinionated upfront about what it does and doesn't allow, in my eyes. Do you want to allow adult content and accept that over half your community's content will be porn? Do you want to draw a hard line between adult and non-adult sub-communities? Do you want to minimize flame wars, or not touch comments at all (as long as they aren't breaking the site)? Should sub-communities even be a thing, or should all topics of all styles be thrown into a central feed, with users opting in or out of certain tags? Is it fine for comments to mix in non-sequiturs on certain topics (e.g. politics in an otherwise non-political post)? These all need to be addressed, not necessarily on day one, but well before critical mass is achieved. See OnlyFans as a modern example of that result.

It's not about capital-C "Censorship" when it comes to being opinionated. It's about establishing norms upfront and fostering a community around those opinions. Those opinions should be shown upfront, before a user makes an account, so that they know what to expect, or whether they shouldn't bother with this community at all.
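Concretely, that could be as simple as an explicit policy object that gets rendered on the signup page, something in the spirit of (every field here is invented):

    # Sketch: the community's opinions as an explicit, public config,
    # rendered before signup so people know what they're joining.
    # Every field name here is invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class CommunityPolicy:
        allow_adult_content: bool = False
        separate_adult_subcommunities: bool = True
        has_subcommunities: bool = True           # vs. one central feed with tags
        opt_in_tags: set[str] = field(default_factory=lambda: {"politics", "nsfw"})
        moderate_flame_wars: bool = True
        off_topic_comments_allowed: bool = False  # e.g. politics under a non-political post

    def render_signup_blurb(policy: CommunityPolicy) -> str:
        lines = ["Before you join, here's how this place is run:"]
        lines.append(f"- Adult content allowed: {policy.allow_adult_content}")
        lines.append(f"- Sub-communities: {policy.has_subcommunities}")
        lines.append(f"- Opt-in tags: {', '.join(sorted(policy.opt_in_tags))}")
        lines.append(f"- Flame wars moderated: {policy.moderate_flame_wars}")
        return "\n".join(lines)

The point isn't the exact fields; it's that the opinions are written down and visible before anyone commits to the community.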



