#theorem


Content moderation is, inherently, a subjective practice.

Despite some people’s desire to have content moderation be more scientific and objective, that’s impossible.

By definition, content moderation is always going to rely on judgment calls,
and many of the judgment calls will end up in gray areas where lots of people’s opinions may differ greatly.

Indeed, one of the problems of content moderation that we’ve highlighted over the years is that to make good decisions you often need a tremendous amount of #context,
and there’s simply no way to adequately provide that at scale in a manner that actually works.

That is, when doing content moderation at scale, you need to set rules,
but rules leave little to no room for understanding context and applying it appropriately.

And thus, you get lots of crazy edge cases that end up looking bad.

We’ve seen this directly.

Last year, when we turned an entire conference of “content moderation” specialists into content moderators for an hour,
we found that in none of the eight cases we presented could we get all attendees to agree on what should be done.

Further, people truly underestimate the impact that “#scale” has on this equation.

Getting 99.9% of content moderation decisions at an “acceptable” level probably works fine for situations when you’re dealing with 1,000 moderation decisions per day,
but large platforms are dealing with way more than that.

If you assume that there are 1 million decisions made every day,
even with 99.9% “accuracy”
(and, remember, there’s no such thing, given the points above),
you’re still going to “miss” 1,000 calls.

But 1 million is nothing.
On Facebook alone, a recent report noted, there are 350 million photos uploaded every single day.

And that’s just photos.
If there’s a 99.9% accuracy rate,
the moderation system is still going to make “mistakes” on 350,000 images.
Every. Single. Day.

So, add another 350,000 mistakes the next day. And the next. And the next. And so on.
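
As a quick back-of-the-envelope sketch, purely illustrative and using the same hypothetical 99.9% figure and the volumes mentioned above, the arithmetic looks like this in Python:

```python
# Purely illustrative: expected daily "misses" at a hypothetical 99.9% accuracy,
# using the volumes mentioned in the post.
accuracy = 0.999

for volume in (1_000, 1_000_000, 350_000_000):
    misses = volume * (1 - accuracy)   # expected bad calls per day
    print(f"{volume:>11,} decisions/day -> ~{misses:,.0f} misses per day")
```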

And even if you could achieve such high “accuracy,” with so many mistakes
it wouldn’t be difficult for, say, a journalist to go searching and find a bunch of them
— and point them out.

This will often come attached to a line like
“well, if a reporter can find those bad calls, why can’t Facebook?”
which leaves out that Facebook DID find that other 99.9%.

Obviously, these numbers are just illustrative, but the point stands that when you’re doing content moderation at scale,
the scale part means that even if you’re very, very, very, very good, you will still make a ridiculous number of mistakes in absolute numbers every single day.

So while I’m all for exploring different approaches to content moderation,
and see no issue with people calling out failures when they (frequently) occur,
it’s important to recognize that there is no perfect solution to content moderation,
and any company, no matter how thoughtful and deliberate and careful, is going to make mistakes.

Because that’s #Masnick’s #Impossibility #Theorem
— and unless you can disprove it, we’re going to assume it’s true
techdirt.com/2019/11/20/masnic

Techdirt · Masnick's Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well

As some people know, I’ve spent a fair bit of time studying economist Kenneth Arrow whose work on endogenous growth theory and information economics influenced a lot of my thinking on the eco…

❛❛ Teen #Mathematicians Tie #Knots Through a #Mind-Blowing Fractal ❜❜

Three high schoolers and their mentor revisited a century-old #theorem to prove that all #knots can be found in a #fractal called the #MengerSponge.
Gregory Barber for #QuantaMagazine

🔗 QuantaMagazine.org/teen-mathem 2024 Nov 26 ce
🔗 Wikipedia.org/wiki/Menger_spon #MengerSponge
🔗 Wikipedia.org/wiki/Topology #Topology

Replied in thread

@BartoszMilewski Maybe we should stop asking ourselves what #intelligence is. Let's take a different approach. I think defining #stupidity would be much simpler. After all, of how many people you know could you say that they are intelligent? It's easier to tell whether a person is stupid, or whether they did something stupid. Also, on an average day, I meet many more stupid people than intelligent ones. Once we define stupidity, and maybe write down some #theorem or #algorithm to simulate stupidity, we will be able to train a system to avoid that kind of behaviour. We may finally be able to build an #AI which works better not because it's intelligent, but because it's non-stupid enough for our purposes.

#decompwlj ➡️ It's a decomposition of positive integers. The weight is the smallest number such that, in the Euclidean division of a number by its weight, the remainder is the jump (the first difference, or gap). The quotient is the level. So to decompose a(n), we need a(n+1) with a(n+1) > a(n) (a strictly increasing sequence); the decomposition is possible if a(n+1) < 3/2 × a(n), and we have the unique decomposition a(n) = weight × level + jump.
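
A minimal Python sketch of that rule (my own illustration, not the code behind the OEIS pages; it uses the fact that a(n) mod k = jump exactly when k > jump and k divides a(n) − jump):

```python
def decompose(a_n, a_next):
    """Sketch of the weight/level/jump decomposition described above.

    jump   = a(n+1) - a(n)  (first difference)
    weight = smallest k with a(n) mod k == jump, i.e. the smallest
             divisor of a(n) - jump that is larger than jump
    level  = a(n) // weight, so that a(n) = weight * level + jump
    """
    jump = a_next - a_n
    # Requires a strictly increasing sequence with a(n+1) < 3/2 * a(n).
    if jump <= 0 or 2 * a_next >= 3 * a_n:
        return None
    for k in range(jump + 1, a_n - jump + 1):
        if a_n % k == jump:
            return k, a_n // k, jump   # (weight, level, jump)
```

For example, decompose(10, 11) gives (3, 3, 1), i.e. 10 = 3 × 3 + 1, and decompose(14, 15) gives (13, 1, 1).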

We see the fundamental theorem of arithmetic and the sieve of Eratosthenes in the decomposition of natural numbers into weight × level + jump. For natural numbers, the weight is the smallest prime factor of (n-1) and the level is the largest proper divisor of (n-1). Natural numbers classified by level are the (primes + 1), and natural numbers classified by weight are the (composites + 1).
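
A small worked illustration of that claim, with a hypothetical helper of my own: for a natural number n the jump is 1, so the weight is the smallest k > 1 dividing n − 1 (its smallest prime factor) and the level is the complementary, largest proper divisor.

```python
# Natural numbers: jump = 1, so the weight of n is the smallest k > 1
# dividing n - 1 (its smallest prime factor) and the level is the
# complementary, largest proper divisor of n - 1.
def weight_level(n):
    for k in range(2, n):
        if (n - 1) % k == 0:
            return k, (n - 1) // k

for n in (5, 8, 10, 16, 22):
    w, l = weight_level(n)
    print(f"{n} = {w} * {l} + 1")   # e.g. 22 = 3 * 7 + 1, 8 = 7 * 1 + 1
```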

For prime numbers, this decomposition led to a new classification of primes. Primes classified by weight follow Legendre's conjecture, and I conjecture that primes classified by level rarefy. I think this conjecture is very important for the distribution of primes.

It's easy to see and prove that the lesser members of twin prime pairs (>3) have a weight of 3. So the twin primes conjecture can be rewritten: there are infinitely many primes that have a weight of 3.
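
A quick sanity check of that statement (throwaway code of my own, assuming as above that the jump of a lesser twin prime p is 2, since the next prime is p + 2): because p ≡ 2 (mod 3) for every lesser twin prime > 3, the smallest k > 2 leaving remainder 2 is always 3.

```python
# Check up to 10,000: every lesser twin prime p > 3 has p % 3 == 2,
# so the smallest k > jump = 2 with p mod k == 2 is always 3.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

lesser_twins = [p for p in range(5, 10_000) if is_prime(p) and is_prime(p + 2)]
assert all(p % 3 == 2 for p in lesser_twins)
print(len(lesser_twins), "lesser twin primes checked; every one has weight 3")
```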

I am not a mathematician, so I decompose sequences to promote my vision of numbers. By doing these decompositions, I apply a kind of sieve to each sequence.

➡️ oeis.org/wiki/Decomposition_in