I think that something like this helps developers get oriented in the project. Maybe administrators, too. This graph is not meant for users/mappers.

- Solid links are docker-compose config-related.
- Dashed links are HTTP requests.
- Dotted links are connections that are not used right now.

This is the I/O utilization graph I forgot to post for the last "average" test of 200 mappers and 2 areas.


I did another round of "average" load testing for 200 mappers and 2 areas to maintain compatibility with en.osm.town/web/statuses/10618 and en.osm.town/web/statuses/10618 (the same number of mappers, the same number of areas).

The interesting times are: 11:40 at the start -- spawning of the mappers. Next, around 12:13 to 12:18, when the mapping is finished (i.e. 0 % left to map). Finally, 12:45 -- the upkeep script runs while the server is returning 404: no square to map. (I checked it with `journalctl -u damn_upkeep.service -f`.)


First round of endurance load testing. Started at 12:00, 0% to map at 15:00, non-recoverable CPU load after 18:00.

My guess is that it's the upkeep scripts.

Also, I think that I should speed up the waiting of the locust average mappers from 30 - 60 seconds to 2 - 5 seconds. I will keep the mapping time at 30 - 60 seconds, though.

Let's see.
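
For illustration, a minimal locust sketch of what such an "average" mapper could look like -- the endpoint paths and payloads are hypothetical placeholders, not the project's actual locustfile (that lives in git.sr.ht/~qeef/damn-server/tr). The speed-up mentioned above would then be a one-line change to `wait_time`:

```python
# Minimal sketch of an "average" mapper in locust -- NOT the project's actual
# locustfile; the endpoints and payloads below are hypothetical placeholders.
import random
import time

from locust import HttpUser, task, between


class AverageMapper(HttpUser):
    # Waiting between two mapping rounds: 30 - 60 s in the current scenario;
    # the proposed speed-up would change this to between(2, 5).
    wait_time = between(30, 60)

    @task
    def map_square(self):
        # Hypothetical endpoint: ask the server for a square to map.
        with self.client.post("/area/2/map", catch_response=True) as response:
            if response.status_code == 404:
                # "404: no square to map" is expected near the end of an area,
                # so do not count it as a failure.
                response.success()
                return
        # Simulate the mapping work itself: 30 - 60 s stays unchanged.
        time.sleep(random.uniform(30, 60))
        # Hypothetical endpoint: commit the mapped square back to the server.
        self.client.post("/area/2/commit", json={"status": "mapped"})
```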


I think that 200 mappers (30 s mapping, 30 s waiting) with 4 areas is the ceiling for load testing. I've tested 300 mappers with 6 areas for about 10 minutes and I don't feel confident about it.

Recall that the instance of damn-project.org/ runs on a $5/month VPS with 1x 2.2 GHz vCPU, 1 GB RAM, and 25 GB SSD.


Deploy v0.20.0 (git.sr.ht/~qeef/damn-deploy/re) includes the improvements.

The results of load testing for the server of damn-project.org/ with 4 areas, 200 mappers (all spawned in the first minute), and the "average" load scenario: 30 - 60 s mapping, 30 - 60 s waiting (git.sr.ht/~qeef/damn-server/tr) are attached.

It ran for approximately 1.5 hours -- after that time there was 0 % left to map for all of the areas, so the results could become biased.

Server: 1x 2.2 GHz vCPU, 1 GB RAM, 25 GB SSD.


I made some improvements to the server and the database. Testing on localhost looks good. Next, I need to test the server of the damn-project.org/ instance.

(The images are from the "extreme" scenario of load testing git.sr.ht/~qeef/damn-server/tr run for a few minutes only.)


I had to stop the load test of

mappy.damn-project.org/?area=2
mappy.damn-project.org/?area=2

with 200 mappers after approximately 20 minutes. (The test ran from 17:00 till 17:30.) The statistics of the areas when stopped were (to map, to review, done) 9%/66%/25% and 37%/47%/16%, respectively.

The reason to stop the test was an increase in resource utilization. Unfortunately, I have no database logs this time.


damn-project.org/ runs the project on a $5/month do.co instance with 1 GB RAM, a single 2.2 GHz vCPU, and a 25 GB SSD disk. Why not load test it?

I ran the locust.io load testing for

mappy.damn-project.org/?area=2
mappy.damn-project.org/?area=2

with 100 mappers (all spawned in the first minute) and the "average" load scenario: 30 - 60 s mapping, 30 - 60 s waiting (git.sr.ht/~qeef/damn-server/tr).

After 3 hours, the statistics of the areas were (to map, to review, done) 0%/31%/69% and 0%/55%/45%, respectively.


The overnight test failed (too many 404: no square to map -- that's not an error, but I don't want to test it). So, I ran the average test (each mapper maps for 30 - 60 s, then waits for 30 - 60 s before the next mapping) again for 3 hours.

For the database queries:
- Max duration: 1 s 190 ms
- Latency Percentile(99): 32 s 502 ms

From the locust.io testing: 31582 POST requests (the real work) with 99%ile of 420 ms!

2 uvicorn workers, 4 GB RAM, Intel i5-2520M @ 2.5 GHz. Locust.io runs on the same machine.
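
For context, starting an ASGI app with two uvicorn workers can look roughly like this (the import string `damn.server:app` is a hypothetical placeholder, not necessarily the project's real module path):

```python
# Rough sketch: start the server with 2 uvicorn workers.
# "damn.server:app" is a hypothetical import string, not the project's actual one.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("damn.server:app", host="0.0.0.0", port=8000, workers=2)
```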


I've added a locust.io config file to the server and performed an "extreme" load test. (I.e., I've used the "extreme" load times of git.sr.ht/~qeef/damn-server/tr.)

The outcomes are: (1) the Python server just waits for the database (and it should be like that), (2) about 1 GB RAM is sufficient, (3) I need to work on the SELECT queries.

Query type: Average / Percentile(99)
INSERT: 2 ms / 634 ms
SELECT: 32 ms / 57 s 834 ms
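
To figure out which SELECTs need the work, something like pg_stat_statements is handy. A rough sketch, assuming the database is PostgreSQL with that extension enabled (the connection string is a hypothetical placeholder):

```python
# Rough sketch: list the slowest query shapes via pg_stat_statements.
# Assumes PostgreSQL with the pg_stat_statements extension enabled;
# the connection string is a hypothetical placeholder.
# Column names are for PostgreSQL 13+; older versions use mean_time / max_time.
import psycopg2

conn = psycopg2.connect("dbname=damn user=damn")
with conn.cursor() as cur:
    cur.execute("""
        SELECT query, calls, mean_exec_time, max_exec_time
          FROM pg_stat_statements
         ORDER BY mean_exec_time DESC
         LIMIT 10;
    """)
    for query, calls, mean_ms, max_ms in cur.fetchall():
        print(f"{mean_ms:9.1f} ms avg | {max_ms:9.1f} ms max | {calls:6d} calls | {query[:60]}")
conn.close()
```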

Next is "average" load test run overnight.

So the project is decentralized from the Mastodon point of view.


There are four clients now: the lightweight one, proposing the common workflow; the panel that can be opened from the lightweight client; the mappy client for manual square locking and merging; and finally, the JOSM plugin.

client.damn-project.org/
mappy.damn-project.org/
damn-project.org/

This is the current architecture. Solid links are docker-compose config-related connections. Dashed links are HTTP connections. Dotted links are not used now.

The dot file with the architecture is here: git.sr.ht/~qeef/damn-deploy/tr

I am working on the project refactor/redesign. The web client is slowly getting done. (There are changes on the server side, too, so it's not compatible with the current damn-project.org/ server. Also, I just noticed I still need to work on the statistics.)

One big difference will happen: I will allow only the common workflow in the _light_ client. That, of course, implies there will be some new _mappy_ client ^^

A nice question from @pistangmapa about the client last Friday: "What about low bandwidth connections?"

I tried to test it, but I'm not sure how to do it exactly. If you have a better idea, I would be happy to hear it.

I tried Chromium "Performance" with a custom setting for GPRS (50 kbps download, 20 kbps upload, 500 ms latency).

I tested two requests from the server:

- server.damn-project.org/areas (20 areas, 4.0 kB)
- server.damn-project.org/area/7 (295 commits, 44.0 kB)

which is comparable to the downloaded client.
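
As a back-of-the-envelope check, the expected transfer times on that throttled link can be estimated with plain arithmetic (a sketch, not a real measurement):

```python
# Back-of-the-envelope estimate of transfer time over the simulated GPRS link.
# 50 kbps download and 500 ms latency are the Chromium throttling values above.
DOWNLOAD_KBPS = 50       # kilobits per second
LATENCY_S = 0.5          # round-trip latency in seconds

def transfer_time(size_kb):
    """Seconds to fetch size_kb kilobytes over the throttled link."""
    return LATENCY_S + (size_kb * 8) / DOWNLOAD_KBPS

print(f"/areas   (4.0 kB): ~{transfer_time(4.0):.1f} s")    # ~1.1 s
print(f"/area/7 (44.0 kB): ~{transfer_time(44.0):.1f} s")   # ~7.5 s
```

So even the larger request should arrive in well under ten seconds on a GPRS-like connection.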

I could not resist. I put 4,000 areas on the server. (HOT TM has 3,093 projects.)

The damn server returns all 4,023 areas in times comparable to TM returning 14 of its 3,093.

And I am not talking about "Downloaded client".

I would really love to see some reasonable load testing of Tasking Manager by .


The client is a static web page. So I added a "Download client" link to the bottom. Why?

1. Stable client:

client.damn-project.org/

-- too slow as the damn deploy is not optimized.

2. Testing client:

damn-project.gitlab.io/damn_cl

-- not bad, but you still need to download some JavaScript, etc.

3. Downloaded client:

client.damn-project.org/damn-c

-- just save it to your desktop and you only need to get data from the server.
