Preparing for the new mapping workflow brings not much visible change. (The teaser is two new scripts in https://git.sr.ht/~qeef/damn-client.py.)
Make a community presentation at an #OpenStreetMap Foundation #OSMF Board Meeting. Are you part of a local chapter, or a local community, putting together an organized project, or developing software? The board would love to hear from you.
I will be off the net for the weekend; otherwise I would make the release just now.
I think that something like this helps developers get oriented in the #damn project. Maybe administrators, too. This graph is not meant for users/mappers. (See the sketch after the list below.)
- Solid links are related to the docker-compose configuration.
- Dashed links are HTTP requests.
- Dotted links are connections that are just not used now.
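For the record, a minimal sketch of how such a graph could be generated with the Python graphviz package; the component names and the particular edges here are illustrative assumptions, not the actual damn project topology:

```python
from graphviz import Digraph

g = Digraph("damn", comment="damn project component overview")

# Solid: wired together by the docker-compose configuration.
g.edge("damn-server", "postgis", style="solid")

# Dashed: talks over HTTP at runtime.
g.edge("damn-client", "damn-server", style="dashed")

# Dotted: a connection that is just not used now.
g.edge("damn-client", "postgis", style="dotted")

print(g.source)  # or g.render("damn-overview", format="png") with dot installed
```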
I've summarized the main points of the thread in the #OpenStreetMap diary https://www.openstreetmap.org/user/qeef/diary/396811
This is the forgotten I/O utilization graph for the last "average" test of 200 mappers and 2 areas.
I did another round of "average" load testing with 200 mappers and 2 areas to maintain compatibility with https://en.osm.town/web/statuses/106189249266359926 and https://en.osm.town/web/statuses/106189292966532711 (the same number of mappers, the same number of areas).
The interesting times: 11:40, the start, when the mappers are spawned; around 12:13 to 12:18, when the mapping is finished (i.e. 0 % left to map); and finally 12:45, when the upkeep script runs while the server is returning 404: no square to map. (I checked it with `journalctl -u damn_upkeep.service -f`.)
So the results look more like the "extreme" results when the times are shifted towards the "extreme" load times (https://git.sr.ht/~qeef/damn-server/tree/master/item/tests/mapathon.py). I will keep "extreme" and "average" clearly distinguished.
What also influences the results is the locust _spawn rate_ (mappers spawned per second). I will keep it such that all the mappers are spawned within the first minute; see the sketch below.
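A minimal sketch of what I mean, assuming a hypothetical locustfile; the endpoint path and host are placeholders, the real scenario lives in tests/mapathon.py:

```python
from locust import HttpUser, between, task

class AverageMapper(HttpUser):
    # "Average" profile: wait 30-60 seconds between tasks.
    wait_time = between(30, 60)

    @task
    def map_square(self):
        self.client.get("/api/areas")  # placeholder request, not the real API

# The spawn rate is given on the command line. 200 mappers at 4 mappers per
# second means all of them are spawned within the first 50 seconds:
#
#   locust -f mapathon.py --headless --users 200 --spawn-rate 4 \
#          --host https://example.org
```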
Also, the actual #postgis database parameters are in https://git.sr.ht/~qeef/damn-deploy/commit/5b5bf8cc7c819b1bb76f5a37d07e32ceac606616
First round of endurance load testing. Started at 12:00, 0 % to map at 15:00, non-recoverable CPU load after 18:00.
My guess is that it's the upkeep scripts.
Also, I think that I should speed up the waiting time of the locust average mappers from 30 - 60 seconds to 2 - 5 seconds. I will leave the mapping time at 30 - 60 seconds, though.
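A sketch of the change, again with a placeholder endpoint; it assumes the mapping time is simulated by a sleep inside the task, which may differ from the real scenario:

```python
import random
import time

from locust import HttpUser, between, task

class AverageMapper(HttpUser):
    # Waiting between tasks sped up from 30-60 seconds to 2-5 seconds.
    wait_time = between(2, 5)

    @task
    def map_square(self):
        # The mapping time itself stays at 30-60 seconds.
        time.sleep(random.uniform(30, 60))
        self.client.put("/api/squares/1")  # placeholder "square mapped" request
```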
I think that 200 mappers (30 s mapping, 30 s waiting) with 4 areas is the upper limit for load testing. I've tested 300 mappers with 6 areas for about 10 minutes, and I don't feel confident about it.
The results of load testing for the server of https://www.damn-project.org/ with 4 areas and 200 mappers (all spawned in the first minute) and the "average" load scenario: 30 - 60 s mapping, 30 - 60 s waiting (https://git.sr.ht/~qeef/damn-server/tree/master/item/tests/mapathon.py), are attached.
Run for approximately 1.5 hours; after that time there was 0 % left to map in all of the areas, so the results could become biased.
Server: 1x 2.2 GHz vCPU, 1 GB RAM, 25 GB SSD.
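A back-of-the-envelope estimate of the steady-state load under that scenario; this is a simplification that assumes one mapping-related request per mapper cycle, the real scenario may issue more:

```python
# "Average" scenario: 30-60 s mapping plus 30-60 s waiting per cycle.
mappers = 200
mean_mapping_s = (30 + 60) / 2             # 45 s
mean_waiting_s = (30 + 60) / 2             # 45 s
cycle_s = mean_mapping_s + mean_waiting_s  # 90 s per mapper cycle

print(f"~{mappers / cycle_s:.1f} squares mapped per second")  # ~2.2
```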
(The images are from the "extreme" scenario of load testing, https://git.sr.ht/~qeef/damn-server/tree/master/item/tests/mapathon.py, run for a few minutes only.)
I have encountered more image descriptions on Mastodon in 24 hours than I did on Twitter in a couple of years. Seriously. I'm not exaggerating.
As a blind person, this means a lot to me. If you read this and you describe your images, thank you so, so, so much on behalf of all of us. If you don't, now you know you'll be helping random Internet strangers make sense of your posts by typing in a few more words than usual.