> Moderation focuses on addressing the low quality, while curation makes sure the high quality posts receive focus.
This is effectively the same problem. The feed has a billion posts in it so if you're choosing from even the top half in terms of quality, the bottom decile is nowhere to be seen.
> The latter poisons a site before it can really start to curate to its users.
That's assuming you start off with a fire hose. Suppose you only see someone's posts in your feed if a) you visit their profile or b) someone you follow posted or liked it.
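To make that concrete, here is a minimal sketch (Python, with a made-up Post/Viewer shape that isn't from any real platform) of a feed built only from profiles you've visited plus posts that people you follow authored or liked:

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        post_id: int
        author: str

    @dataclass
    class Viewer:
        follows: set[str] = field(default_factory=set)
        visited_profiles: set[str] = field(default_factory=set)

    def build_feed(viewer: Viewer, posts: list[Post], likes: dict[int, set[str]]) -> list[Post]:
        """Include a post only if (a) the viewer visited the author's profile,
        or (b) someone the viewer follows authored or liked it."""
        feed = []
        for post in posts:
            liked_by = likes.get(post.post_id, set())
            visited = post.author in viewer.visited_profiles
            via_follow = post.author in viewer.follows or bool(viewer.follows & liked_by)
            if visited or via_follow:
                feed.append(post)
        return feed

The point is that the default is silence: a stranger's post never reaches you unless it comes through someone you already chose to follow or seek out.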
> ensure you know when a human or a bot is posting.
This is not possible and you should not attempt to do things that are known not to be possible.
It doesn't matter what kind of verification you do. Humans can verify an account and then hand it to a bot to post things. Also, alts are good; people should be able to have an account for posting about computers and a different account for posting about cooking or travel or politics.
What you're looking for is a way to rate limit account creation. But on day one you don't need that, because your biggest problem is getting more users, and by the time it's a problem you have a network effect and can just make them pay a pittance worth of cryptocurrency as a one-time fee, if it's still a thing you want to do.
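For what rate limiting signups could look like, here's a rough sketch assuming a simple per-source sliding window (the cap, the window, and the notion of "source" are all placeholders, not a recommendation):

    import time
    from collections import defaultdict, deque

    class SignupRateLimiter:
        """Allow at most max_signups new accounts per window_seconds from one
        source (e.g. an IP address). Illustrative only; a real deployment needs
        persistence and a harder-to-spoof notion of source."""

        def __init__(self, max_signups: int = 3, window_seconds: int = 3600):
            self.max_signups = max_signups
            self.window_seconds = window_seconds
            self.recent: dict[str, deque[float]] = defaultdict(deque)

        def allow(self, source: str) -> bool:
            now = time.monotonic()
            timestamps = self.recent[source]
            # Drop signups that have aged out of the window.
            while timestamps and now - timestamps[0] > self.window_seconds:
                timestamps.popleft()
            if len(timestamps) >= self.max_signups:
                return False
            timestamps.append(now)
            return True

The one-time fee idea slots into the same place: the allow check just becomes "has this account paid" instead of "has this source been seen too often".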
> It can even extend in some domains to making sure e.g. that a person who says they worked at Google actually worked at Google.
This is not a problem that social networks need to solve, but if it was you would just do it the way anybody else does it. If the user wants to know if someone really works for Google they contact the company and ask them, and if the company says no then you tell everybody that and anyone who doesn't believe you can contact the company themselves.
> It may include addressing misinformation and hate, or removing non-transparent intimate advice like legal/medical claims without sources or proper licensing.
If someone does something illegal then you have the government arrest them. If it isn't illegal then it isn't to be censored. There is nothing for a social media thing to be involved in here and the previous attempts to do it were in error.
> It essentially boils down to ensuring that "bad but popular" advice doesn't proliferate, as it otherwise tends to do.
To the extent that social media does such a thing, it does it exactly as above, i.e. as Reddit communities investigate things. If you want a professional organization dedicated to such things as an occupation, the thing you're trying to do is called investigative reporting, not social media.
> I think we've seen enough examples of how dangerous "showing people what they actually want to see" can be if left unchecked. And the incentives to keep them up are equally dangerous in an ad-driven platform.
No, they're much worse in an ad-driven platform, because then you're trying to maximize the amount of time people spend on the site and showing people rage bait and provocative trolling is an effective way to do that.
What people want to see is like, a feed of fresh coupon codes that actually work, or good recipes for making your own food, or the result of the DIY project their buddy just finished. But showing you that doesn't make corporations the most money, so instead they show you somebody saying something political and provocative about vaccines because it gets people stirred up. Which is not actually what people want to see, which is why they're always complaining about it.
> We already give some authoritarianism by having moderators we trust to remove spam and illegal content, so I don't see it as a giant overstep to make sure they can also do this.
We should take away their ability to actually remove anything, censorship can GTFO, and instead give people a feed that they actually control and can therefore configure to not show that stuff because it is in reality not what they want to see.
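The user-controlled version of that is not complicated; a minimal sketch, assuming a per-user mute list that the platform merely applies and never edits (all names here are made up):

    from dataclasses import dataclass, field

    @dataclass
    class FeedPreferences:
        """Filters owned and edited by the user, not by moderators."""
        muted_keywords: set[str] = field(default_factory=set)
        muted_authors: set[str] = field(default_factory=set)

    def visible(prefs: FeedPreferences, author: str, text: str) -> bool:
        """Hides the post for this user only; nothing is removed for anyone else."""
        if author in prefs.muted_authors:
            return False
        lowered = text.lower()
        return not any(word in lowered for word in prefs.muted_keywords)

The difference from moderation is that the post still exists, and anyone who didn't opt out still sees it.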
Maybe when you get to the scale of Reddit it becomes the same problem. But a fledgling community is more likely to be dealing with dozens of real posts and hundreds of spam posts. Even then, the solutions differ as much as the problem spaces do, so I'm not so certain.
You can't easily automate a search for "quality", so most popular platforms focus on a mix of engagement and similarities to create a faux quality rating. Spam filtering and removal can be fairly automatic and accurate, as long as there are ways to appeal false positives (though these days, they may not even care about that).
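As a toy illustration of that faux quality rating (the weights and signals are invented, not any real platform's formula):

    def faux_quality_score(upvotes: int, comments: int, shares: int,
                           similarity_to_viewer: float) -> float:
        """Engagement-weighted proxy score. Nothing here measures whether the
        post is actually good -- only whether people reacted to it and whether
        it resembles things the viewer engaged with before."""
        engagement = 1.0 * upvotes + 2.0 * comments + 3.0 * shares
        return engagement * (0.5 + similarity_to_viewer)  # similarity in [0, 1]

Rage bait scores just as well as a genuinely useful post under a formula like this, which is the whole problem.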
>This [ensure you know when a human or a bot is posting] is not possible and you should not attempt to do things that are known not to be possible.
As with all engineering, I'm not expecting perfection; I'm expecting a good effort at it. Is there anything stopping me from hooking an LLM up to my HN account and having it reply to all my comments? No. But I'm sure that if I took a naive approach to it, moderation would take note and take action on this account.
My proposal is twofold:
1. Have dedicated account types for authorized bots, identifying tools and other supportive functions that a community may want performed. They can even have different privileges, like being unable to be voted on (or to vote).
2. Take action on very blatant attempts to bot a human account (the threshold being even more blatant than my example above). If account creation isn't free or easy, a simple suspension or ban can be all that's needed to curb such behavior (a rough sketch follows this list).
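To make that concrete: account types, per-type privileges, and a blunt posting-rate threshold, with every name and number here being a placeholder rather than a real policy:

    from dataclasses import dataclass
    from enum import Enum, auto

    class AccountType(Enum):
        HUMAN = auto()
        AUTHORIZED_BOT = auto()  # registered tools, mod helpers, etc.

    @dataclass
    class Account:
        name: str
        kind: AccountType
        suspended: bool = False

    def can_vote(account: Account) -> bool:
        # Authorized bots can't vote; a matching check elsewhere would keep
        # them from being voted on.
        return account.kind is AccountType.HUMAN and not account.suspended

    def check_blatant_botting(account: Account, posts_last_hour: int,
                              threshold: int = 120) -> None:
        """Only trips on behavior no human could plausibly produce; anything
        subtler is left to ordinary moderation and user reports."""
        if account.kind is AccountType.HUMAN and posts_last_hour > threshold:
            account.suspended = True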
There will still be abuse, but the kind of abuse that has caused major controversies over the years was not exactly the work of subtle masterminds. There was simply no incentive to take action once people reported it.
>This is not a problem that social networks need to solve, but if it was you would just do it the way anybody else does
Probably not. That kind of verification is more domain specific and that's an extreme example. Something trying to be Blind and focus on industry professionals might want to do verification, but probably not some casual tech forum.
It was ultimately an example of what transparency suggests here and how it differs from verification. This is another "good enough" example where I'm not expecting every post to be fact checked. We simply shouldn't allow blatantly false users or content to go unmoderated.
>What people want to see is like, a feed of fresh coupon codes that actually work, or good recipes for making your own food, or the result of the DIY project their buddy just finished. But showing you that doesn't make corporations the most money, so instead they show you somebody saying something political and provocative about vaccines because it gets people stirred up. Which is not actually what people want to see, which is why they're always complaining about it.
Yes. This is why I don't expect such a solution to come from corporations. Outside of the brief flirting with Meta, it's not like any of the biggest players in the game have shown much interest in any of the topics talked about here or in the article.
But the tools and people needed to make such an initiative don't need millions in startup funding. I'm not even certain such a community can be scalable, financially speaking. But communities aren't necessarily formed, run, and maintained for purely financial reasons. Sometimes you just want to open your own bar and enjoy the people that come in, caring only about enough funds to keep the business running, not attempting to franchise it across the country.
>We should take away their ability to actually remove anything, censorship can GTFO, and instead give people a feed that they actually control and can therefore configure to not show that stuff because it is in reality not what they want to see.
If you want a platform that doesn't remove anything except the outright illegal, I don't think we can really beat 4chan. Nor is anyone trying to beat 4chan (maybe Voat still is, but I haven't looked there in years). I think it has that sector of the community on lock.
But that aside: any modern community needs to be very opinionated upfront about what it does and doesn't allow, in my eyes. Do you want to allow adult content and accept that over half your community's content will be porn? Do you want to take a hard line between adult and non-adult sub-communities? Do you want to minimize flame wars, or not tend to comments at all (beyond those breaking the site)? Should sub-communities even be a thing, or should all topics of all styles be thrown into a central feed where users get to opt in/out of certain tags? Is it fine for comments to mix in non-sequiturs on certain topics (e.g. politics in an otherwise non-political post)? These all need to be addressed, not necessarily on day one, but well before critical mass is achieved. See OnlyFans as a modern example of that result.
It's not about capital-C "Censorship" when it comes to being opinionated. It's about establishing norms upfront and fostering a community around those opinions. Those opinions should be shown upfront, before a user makes an account, so that they know what to expect, or whether they shouldn't bother with this community.