en.osm.town is one of the many independent Mastodon servers you can use to participate in the fediverse.
An independent community of OpenStreetMap people on the Fediverse/Mastodon. Funding graciously provided by the OpenStreetMap Foundation.

Server stats: 250 active users

#supercomputers

1 post · 1 participant · 0 posts today

If #NSF Snoozes, #TACC’s “#Horizon” #Supercomputer Loses
The #NationalScienceFoundation is front and center in the spending cuts proposed by Trump; if they come to pass, those cuts will undercut scientific research and the computing tools that make it possible.
NSF is a conduit for a great deal of #research funding and for half of the #US spending on reasonably large capability-class (strong scaling for large workloads) and capacity-class (weaker scaling for many workloads) #supercomputers.
nextplatform.com/2025/04/28/if

The Next Platform · If NSF Snoozes, Then TACC’s “Horizon” Supercomputer Loses · It is a tumultuous time for any agency in the US government or any company or organization that depends on the US government for a sizable portion of its

Some warnings about weather data when using -->
earth.nullschool.net

Text is:

#Weather and #climate data shown on this website and countless others are at risk.

The National Oceanic and Atmospheric Administration (#NOAA) is the U.S. agency responsible for global weather #forecasting, #hurricane #prediction, #ocean #observation, and many other services vital to public safety. Its #satellites, #supercomputers, and research teams provide essential #data that help us understand our #planet and #protect #lives.

On February 27, the new U.S. administration initiated mass firings at NOAA. These actions are #unethical and deeply #disruptive to the talented #scientists and #engineers who dedicate themselves to the public good. The firings, along with expected budget cuts, have serious implications for the availability and #quality of #WeatherForecasts produced by the #UnitedStates. They must be reversed immediately.

Much of the data on this website is downloaded directly from NOAA's servers. In this environment of uncertainty, access could be #disrupted at any time. While I'll strive to keep all features on this website functional and switch to alternative data sources if necessary, some datasets have no substitute if they go offline.

If this concerns you, speak up. Share on #SocialMedia. And if you're in the U.S., contact your representatives.

#FacialRecognition software for people with faceblindness? :flan_excite:

#EmotionalRecognition software for people who have trouble reading members of the normative society? :flan_heart:

Complex #AI that can learn to recognize and understand small gestures by paralyzed people to make communication possible? :flan_hug:

#SuperComputers or whatever to help scientists and engineers develop tools that make life easier for marginalized people? :flan_hearts:

#ChatBots that increase everybody's prejudices? :flan_facepalm:

#China releases #Top100 #supercomputer list for #2024: No #ExaFLOPS systems mentioned, obfuscation continues
The only difference between the 2023 and 2024 #supercomputers is minor upgrades. However, the Top 100 list of supercomputers may not accurately reflect China's #supercomputing capabilities. Jack Dongarra, a supercomputer industry luminary and co-founder of the #Top500 list, said last year that China has at least three ExaFLOPS-class machines with performance ranging from 1.3 to 1.7 ExaFLOPS.
tomshardware.com/tech-industry

Tom's Hardware · China releases Top 100 supercomputer list for 2024: No ExaFLOPS systems mentioned, obfuscation continues · By Anton Shilov

#AI #GPU clusters with one million GPUs are planned for #2027
#Broadcom predicts that three of its clients, #hyperscalers, will deploy AI #supercomputers with one million #XPU chips in fiscal 2027. #HockTan, President and CEO, said on the Q4 2024 earnings call: "In 2027, we believe each of them plans to deploy 1,000,000 XPU clusters across a single fabric." It was rumored that #ByteDance and #OpenAI teamed up with Broadcom to develop their AI chips. Broadcom, of course, does not mention names.
tomshardware.com/tech-industry

Tom's Hardware · AI GPU clusters with one million GPUs are planned for 2027 — Broadcom says three AI supercomputers are in the works · By Anton Shilov

#EuroHPC is the European organisation for #supercomputers, run by the EU, participating states, and private partners. They are all set to chip in to finance Europe's leading computers and the programs giving access to them.

But last week, the European Court of Auditors published a report criticising one part of the arrangement: the private partners' funding. From 2021 to 2027, they are supposed to contribute 900 million Euros. But so far, it has only been 0.8 million Euros, less than one percent of the set amount.

The reason for this: the private partners are only allowed to contribute in very limited cases, not to infrastructure, but to grants.

The EU Commission & EuroHPC say that EuroHPC has been busy building up hardware and will now concentrate more on areas where private partners can chip in.

Still, this is not entirely unforeseen: In the last financing timespan, up to 2020, private partners were set to contribute 420 million Euros and did not come close to that. So the auditors ask: Why was their contribution more than doubled?

My full article in German at @TspBackgroundDi: background.tagesspiegel.de/dig

And here's the Court of Auditors' report: eca.europa.eu/en/publications/

#HPC #Europe #EU

#DOE drops $23M in effort to reinvigorate #supercomputing
Challenges span energy efficiency, memory, programmability, and national security

DOE’s #NewFrontiers Program Aims to Tackle #HPC Bottlenecks with $23M Investment

Under the New Frontiers initiative, the agency will solicit the help of private companies to develop technologies that could help to scale compute more efficiently for the next generation of post-#exascale #supercomputers.
hpcwire.com/off-the-wire/does-

HPCwire · DOE’s New Frontiers Program Aims to Tackle HPC Bottlenecks with $23M Investment · WASHINGTON, Sept. 9, 2024 — The Department of Energy’s (DOE) Office of Science has announced a new research and development opportunity led by Oak Ridge National Laboratory (ORNL) to advance technologies […]

#Linux runs #tech.

#Supercomputers guide research in subjects ranging from #quantummechanics to #climatechange, #AI to #cosmology - and each of the #TOP500 runs on #opensource.

Sick of telemetrics, bloatware, costly upgrades, and unplanned updates on your PC?

#UpgradetoLinux, and enjoy the same #ITsecurity, longevity, and performance that power leading innovation. For free: t.ly/qTey4

I get ridiculed by young JavaScript and Python coders, whenever I say that parallel processing is essential to the future of computing.

The seasoned among them point out to me that the idea of #supercomputers is almost as old as me, that their iPhone can run rings round a typical supercomputer I may have used in my grad school days, and that their Python programmes running on laptops can beat anything I may have written on a CRAY in Fortran or C. Those points seem valid, but they miss the mark.

First, just outrunning a 30-year-old system is not a legitimate measure of current performance.

Secondly, if modern hardware performance has reached a level where a naïve implementation of an algorithm in a slow scripting language can beat a hand-tuned parallel programme running on an old supercomputer, then today's programmers have the ethical responsibility to optimise their software implementations by exploiting those newer, greater hardware capabilities available to them.
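
A minimal sketch of that second point, in Python (not from the post; NumPy, the array size, and the dot-product workload are my own assumptions): the same reduction written as a naive interpreted loop and as one vectorised call that lets the compiled library use the CPU's SIMD units.

```python
# Hypothetical illustration: naive Python loop vs. a vectorised NumPy call
# for the same dot product. The workload and size are arbitrary choices.
import time
import numpy as np

N = 10_000_000
a = np.random.rand(N)
b = np.random.rand(N)

# Naive interpreted loop: one element at a time, heavy interpreter overhead, no SIMD.
t0 = time.perf_counter()
acc = 0.0
for i in range(N):
    acc += a[i] * b[i]
t_loop = time.perf_counter() - t0

# Vectorised call: delegates the whole reduction to compiled, SIMD-aware library code.
t0 = time.perf_counter()
acc_vec = a @ b
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.2f} s   vectorised: {t_vec:.4f} s")
```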

Thirdly, if there is so much excess hardware capacity, the software should soak that up by striving for more accuracy, more precision, more features, whatever, but not by mining bitcoins.

Lastly, just about every consumer-grade machine today—server, desktop, laptop, tablet, phone, single-board computer—is a multicore, multiprocessor monster. Programmers should be exploiting those readily available parallel resources, now. Automatic performance upgrade of sequential code by Moore's law and Dennard scaling is dead and gone. And fully automatic parallelisation of sequential code by compilers is still a distant dream.
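
And a minimal sketch of that last point, again in Python (not from the post; the prime-counting workload, chunk layout, and limit are my own choices): spreading a CPU-bound job across every core of an ordinary consumer machine using nothing but the standard library.

```python
# Hypothetical illustration: an embarrassingly parallel, CPU-bound job split
# across all available cores with the standard library's multiprocessing.Pool.
import math
import os
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-bound)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, math.isqrt(n) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit = 200_000
    workers = os.cpu_count() or 1
    step = limit // workers
    # One chunk per core; the last chunk absorbs any remainder.
    chunks = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(f"{total} primes below {limit}, counted on {workers} cores")
```

The same map-independent-chunks-then-reduce pattern carries over to concurrent.futures.ProcessPoolExecutor and to larger parallel frameworks; the sequential version of the same loop would keep all but one core idle.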

#Parallel #programming matters—especially today.

#NASA missions delayed by #supercomputing shortcomings
NASA's old #supercomputers are causing mission delays — NASA's supercomputers still rely primarily on #CPU cores, and one of its flagship supercomputers uses 18,000 CPUs and only 48 #GPU accelerators. NASA's Office of Inspector General report suggests that transitioning to GPUs and #code modernization are essential for meeting NASA's current and future needs.
theregister.com/2024/03/15/nas #Space