en.osm.town is one of the many independent Mastodon servers you can use to participate in the fediverse.
An independent community of OpenStreetMap people on the Fediverse/Mastodon. Funding graciously provided by the OpenStreetMap Foundation.

#context


Sora News 24: Smash Bros. creator learns he can’t tweet carelessly, fans learn they can’t trust AI translations. “Anxious to know what the Japanese text of [Masahiro] Sakurai’s tweet, ほうほう, means, many took to using automatic online translation tools, which in many cases gave them a translation that raised as many questions as it answered when they spat back ‘method’ as the […]

https://rbfirehose.com/2025/03/29/sora-news-24-smash-bros-creator-learns-he-cant-tweet-carelessly-fans-learn-they-cant-trust-ai-translations/

On algorithmic complacency... and context collapse.

"I am seeing mounting evidence that an increasing number of people are so used to algorithmically-generated feeds that they no longer care to have a self-directed experience that they are in control of."

"... it feels like large swaths of people have forgotten to exercise their own agency."

by @TechConnectify

youtube.com/watch?v=QEJpZjg8GuA

And this here is not only an amazing find of a #hoard of Celtic and Roman coins near Utrecht in the Netherlands, but a really great example illustrating how much more of the whole story we can tell thanks to find #context:

theguardian.com/uk-news/2025/j via The Guardian

The Guardian · Ancient British coins found in Dutch field likely to be spoils of Roman conquest, by Daniel Boffey

#KI, #News, #LLM, #Context Window
Language model with a huge context window released: linux-magazin.de/news/minimax-

Linux-Magazin · MiniMax releases LLM with a huge context window

MiniMax has released its language model MiniMax-Text-o1, which comes with the largest context window currently available: four million tokens. The Singaporean maker MiniMax is better known for its generative high-resolution video model Hailuo, which competes with OpenAI's Sora and Luma AI's Dream. A context window is the span of text an AI language model can process and take into account at once. It is measured in tokens, where one token corresponds to roughly 4 characters or 3/4 of a word. Four million tokens thus approach the volume of a small library. The context window acts as the AI model's "short-term memory", allowing it to draw on relevant information from earlier interactions or inputs. Google recently presented its Gemini 1.5 Pro model, which offers a context window of two million tokens; MiniMax now doubles that figure. On X, the company announced: "MiniMax-01 efficiently processes up to 4 million tokens, 20 to 32 times the capacity of other leading models. We believe MiniMax-01 will support the anticipated surge of agent-related applications in the coming year, as agents increasingly require extended context-handling capabilities and persistent memory." On top of all this, the new model also offers excellent value for money: costs are said to be just $0.2 per 1 million input tokens and $1.1 per 1 million output tokens. For comparison, OpenAI's GPT-4o costs $2.50 per 1 million input tokens and is therefore 12.5 times as expensive.
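The article's rule of thumb (one token ≈ 4 characters ≈ 3/4 of a word) makes the "small library" claim easy to check; here is a minimal back-of-the-envelope sketch in Python. The constants are the article's heuristics plus an assumed average novel length, not the output of a real tokenizer:

```python
# Back-of-the-envelope context-window math using the article's
# rule of thumb: 1 token ~ 4 characters ~ 0.75 words.
CHARS_PER_TOKEN = 4
WORDS_PER_TOKEN = 0.75

def window_capacity(tokens: int) -> dict:
    """Estimate how much text fits in a context window of `tokens`."""
    words = tokens * WORDS_PER_TOKEN
    chars = tokens * CHARS_PER_TOKEN
    # Assume ~90,000 words per average novel (a common rough estimate).
    novels = words / 90_000
    return {"words": int(words), "chars": chars, "novels": round(novels, 1)}

print(window_capacity(4_000_000))
# 4M tokens ~ 3,000,000 words, or roughly 33 novels' worth of text.
```

At that size the window really does hold a bookshelf, which is why the post frames it as short-term memory for long agent sessions rather than for single prompts.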

"I believe that people can change their minds. We need to create a more #forgiving #society in which it’s acceptable for people to alter their opinions, because judging them is really antithetical to growth. I’m committed to #education and sharing #ideas; if you grant people a proper understanding of the #conditions and #context in which they live, they are more likely to understand things that would otherwise be blurred or invisible."

medium.com/perlego/surveillanc


Perlego · Surveillance and Security: How Islamophobia Benefits The State, by Lucy Hoyle

@rzeta0 @dmm

I think it's accurate to say that 'must' is too constraining, and that we usually have a one-side-or-the-other view of measurements (/interactions).

So we'll find either time dilation or length contraction (or even a blend of both), depending on the logistics/topology of the systems involved and on our choice (which may be from just one alternative) of the perspective used for framing. #context

Edit: The way you state part two is fine in most cases. I would say it inside out: the laws are consistent because the measured SOL, from any observer & frame, is consistent.

Being able to write laws is the result of consistent behavior/reactions to consistent/measured change (holding other things constant, as a reductive basis).

🤔🤔 2024 has been my best year in terms of number of #citations to my #research articles, with over 1000 citations this year alone.

BUT, more than an increased appreciation of my work, I think this signals a shift in the way research is published. I'm looking at you #arxiv. 👀👀

Not complaining. More #open #publication and #openscience is never bad. I'm just pointing this out because #young #researchers should understand that #context is always key when evaluating a researcher's #CV, and that citation count is always a partial perspective.

Chris Corrigan @chriscorrigan on #participation and #DecisionMaking:

»The exercise of engagement is often window dressing. It can result in hundreds and hundreds of text answers on qualitative surveys that have no rhyme nor reason to them. Comments like “fix the potholes on Elm Street” don’t mean anything without #context, even if a bunch of people say them. […]

#Election success now is about saying you will do a thing, then doing something and successfully externalizing all the bits that didn’t work so you can take credit for the small thing you did. If people buy what you are selling, you will get re-elected.[…]

But there are ways out of this state of affairs.«

chriscorrigan.com/parkinglot/c

Chris Corrigan · Community is participatory. Confirmed yet again that the way to build community, and indeed strengthen participatory and democratic societies, is to do work together. Peter Levine, who I feel like everyone should read, has a n…

Content moderation is, inherently, a subjective practice.

Despite some people’s desire to have content moderation be more scientific and objective, that’s impossible.

By definition, content moderation is always going to rely on judgment calls,
and many of the judgment calls will end up in gray areas where lots of people’s opinions may differ greatly.

Indeed, one of the problems of content moderation that we’ve highlighted over the years is that to make good decisions you often need a tremendous amount of #context,
and there’s simply no way to adequately provide that at scale in a manner that actually works.

That is, when doing content moderation at scale, you need to set rules,
but rules leave little to no room for understanding context and applying it appropriately.

And thus, you get lots of crazy edge cases that end up looking bad.

We’ve seen this directly.

Last year, when we turned an entire conference of “content moderation” specialists into content moderators for an hour,
we found that there were exactly zero cases where we could get all attendees to agree on what should be done in any of the eight cases we presented.

Further, people truly underestimate the impact that “#scale” has on this equation.

Getting 99.9% of content moderation decisions at an “acceptable” level probably works fine for situations when you’re dealing with 1,000 moderation decisions per day,
but large platforms are dealing with way more than that.

If you assume that there are 1 million decisions made every day,
even with 99.9% “accuracy”
(and, remember, there’s no such thing, given the points above),
you’re still going to “miss” 1,000 calls.

But 1 million is nothing.
A recent report noted that on Facebook alone, 350 million photos are uploaded every single day.

And that’s just photos.
If there’s a 99.9% accuracy rate,
it’s still going to make “mistakes” on 350,000 images.
Every. Single. Day.

So, add another 350,000 mistakes the next day. And the next. And the next. And so on.
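The scale arithmetic in the posts above is simple enough to make concrete. A minimal sketch, using the post's illustrative volumes and its hypothetical 99.9% "accuracy" figure (none of these are real platform statistics):

```python
# Illustrative moderation-at-scale arithmetic from the thread:
# even at a 99.9% "accuracy" rate, the absolute number of wrong
# calls grows linearly with daily decision volume.
ACCURACY = 0.999  # the post's hypothetical hit rate

def daily_mistakes(decisions_per_day: int, accuracy: float = ACCURACY) -> int:
    """Expected number of wrong calls per day at a given accuracy."""
    return round(decisions_per_day * (1 - accuracy))

print(daily_mistakes(1_000))        # small forum: ~1 mistake per day
print(daily_mistakes(1_000_000))    # 1,000 mistakes per day
print(daily_mistakes(350_000_000))  # 350,000 mistakes per day, photos alone
```

The last line is the thread's point: the error *rate* is constant, but the error *count* a journalist can go find compounds day after day.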

And, even if you could achieve such high “accuracy” and with so many mistakes,
it wouldn’t be difficult for, say, a journalist to go searching and find a bunch of those mistakes
— and point them out.

This will often come attached to a line like
“well, if a reporter can find those bad calls, why can’t Facebook?”
which leaves out that Facebook DID find that other 99.9%.

Obviously, these numbers are just illustrative, but the point stands that when you’re doing content moderation at scale,
the scale part means that even if you’re very, very, very, very good, you will still make a ridiculous number of mistakes in absolute numbers every single day.

So while I’m all for exploring different approaches to content moderation,
and see no issue with people calling out failures when they (frequently) occur,
it’s important to recognize that there is no perfect solution to content moderation,
and any company, no matter how thoughtful and deliberate and careful is going to make mistakes.

Because that’s #Masnick’s #Impossibility #Theorem
— and unless you can disprove it, we’re going to assume it’s true
techdirt.com/2019/11/20/masnic

Techdirt · Masnick's Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well. As some people know, I’ve spent a fair bit of time studying economist Kenneth Arrow, whose work on endogenous growth theory and information economics influenced a lot of my thinking on the eco…

From the recent online meet-up:

– Hans & Mikael are working on column sets (mostly for magazine layout), the results are already impressive. Parallel text streams are possible (e.g. for translations). Wait for "beyond-pages.pdf".

– Keith implemented a MP function for Perlin noise (colored “clouds”).

– To avoid hyphenation in isolated places, use {\nohyphens } rather than \hbox{}; unlike \hbox, it doesn't create a box. There are also \noligatures and \nokerns.