html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,issue,performed_via_github_app
https://github.com/simonw/datasette/issues/2177#issuecomment-1708777964,https://api.github.com/repos/simonw/datasette/issues/2177,1708777964,IC_kwDOBm6k_c5l2eHs,9599,2023-09-06T17:04:30Z,2023-09-06T17:04:30Z,OWNER,"Here's the thinking: setting the `DATASETTE_INTERNAL` environment variable for your entire machine means that every time you run `datasette x.db`, the resulting process shares the same `internal.db` database as any other running instances.
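To make that concrete, here's a minimal sketch of how that path resolution might work - the function name and fallback behaviour are illustrative assumptions, not Datasette's actual implementation:

```python
import os

def resolve_internal_db_path():
    # Illustrative sketch only - not Datasette's real resolution logic.
    # If DATASETTE_INTERNAL is set machine-wide, every `datasette x.db`
    # invocation resolves to the same file, so all instances share it.
    env_path = os.environ.get('DATASETTE_INTERNAL')
    if env_path:
        return env_path  # shared on-disk internal.db
    return None  # falls back to a private in-memory _internal database
```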
You might run more than one instance at once (I often have 4 or 5 going). This would currently break, because they would overwrite each other's catalog tables:
https://github.com/simonw/datasette/blob/e4abae3fd7a828625d00c35c316852ffbaa5ef2f/datasette/utils/internal_db.py#L5-L62
The breakage wouldn't be obvious because the catalog tables aren't used by any features yet, but it's still bad.
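Here's a toy illustration of the clobbering, using plain `sqlite3` and a simplified schema - this is not the actual code linked above:

```python
import sqlite3

conn = sqlite3.connect('internal.db')
conn.execute(
    'CREATE TABLE IF NOT EXISTS catalog_tables (database_name TEXT, table_name TEXT)'
)

def populate_catalog(database_name, tables):
    # Each instance rewrites the catalog tables when it starts up
    conn.execute('DELETE FROM catalog_tables')
    conn.executemany(
        'INSERT INTO catalog_tables (database_name, table_name) VALUES (?, ?)',
        [(database_name, t) for t in tables],
    )
    conn.commit()

populate_catalog('x', ['articles', 'tags'])  # instance running datasette x.db
populate_catalog('y', ['logs'])              # instance running datasette y.db

print(conn.execute('SELECT * FROM catalog_tables').fetchall())
# [('y', 'logs')] - the first instance's catalog rows are gone
```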
This convinced us that we should actually move those `catalog_` tables OUT of `internal.db`. The `_internal` database will be reserved for plugins that want to use it for caching, storing progress, etc.
I think we should move them to an in-memory `_catalog` database which is excluded from `ds.databases` (like `_internal` is) but can be accessed using `datasette.get_catalog_database()` - similar to `datasette.get_internal_database()`.
So each instance of Datasette gets its own truly private `_catalog`, which is in-memory and so is cleared when the process exits.
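A rough sketch of what that split could look like - the class and wiring here are invented for illustration, and only the two accessor names come from the proposal above:

```python
import sqlite3

class DatasetteSketch:
    # Toy model of the proposed split, not Datasette's real class.
    def __init__(self, internal_path=None):
        # _internal: on-disk and shareable if DATASETTE_INTERNAL is set,
        # otherwise private; reserved for plugins (caching, progress, etc.)
        self._internal = sqlite3.connect(internal_path or ':memory:')
        # _catalog: always in-memory, one per process, excluded from
        # the databases listing, cleared when the process exits
        self._catalog = sqlite3.connect(':memory:')

    def get_internal_database(self):
        return self._internal

    def get_catalog_database(self):
        return self._catalog
```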
An interesting thing that came up about a shared `_internal` database is that it provides opportunities for things like a notes plugin that lets you attach notes to any row in any table in a database... where those notes become available to multiple Datasette instances that you launch on the same laptop.
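A hypothetical sketch of what that plugin could look like, assuming the `get_internal_database()` method described above plus Datasette's standard `startup` plugin hook - the table schema and plugin shape are invented for illustration:

```python
from datasette import hookimpl

@hookimpl
def startup(datasette):
    # Hypothetical notes plugin sketch. With a shared DATASETTE_INTERNAL
    # database, notes written by one instance are visible to all of them.
    async def create_notes_table():
        db = datasette.get_internal_database()
        await db.execute_write(
            '''
            CREATE TABLE IF NOT EXISTS notes (
                database_name TEXT,
                table_name TEXT,
                row_pk TEXT,
                note TEXT
            )
            '''
        )

    return create_notes_table
```
","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",1884408624,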