html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,issue,performed_via_github_app
https://github.com/simonw/datasette/issues/327#issuecomment-1043626870,https://api.github.com/repos/simonw/datasette/issues/327,1043626870,IC_kwDOBm6k_c4-NHt2,208018,2022-02-17T23:37:24Z,2022-02-17T23:37:24Z,NONE,"On second thought, any kind of quick-to-decompress-on-startup format could be helpful if we're paying for the container registry and deployment bandwidth but not for ephemeral storage.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",335200136,
https://github.com/simonw/datasette/issues/327#issuecomment-1043609198,https://api.github.com/repos/simonw/datasette/issues/327,1043609198,IC_kwDOBm6k_c4-NDZu,208018,2022-02-17T23:21:36Z,2022-02-17T23:33:01Z,NONE,"On fly.io. This particular database goes from 1.4 GB to 200 MB. Startup is slower; part of that might be having no `--inspect-file`?
```
$ datasette publish fly ... --generate-dir /tmp/deploy-this
...
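# compress the database into a read-only squashfs image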
$ mksquashfs large.db large.squashfs
$ rm large.db # don't accidentally put it in the image
$ cat Dockerfile
FROM python:3.8
COPY . /app
WORKDIR /app
ENV DATASETTE_SECRET 'xyzzy'
RUN pip install -U datasette
# RUN datasette inspect large.db --inspect-file inspect-data.json
ENV PORT 8080
EXPOSE 8080
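# Loop-mount the read-only squashfs image at /mnt, then serve the database from inside it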
CMD mount -o loop -t squashfs large.squashfs /mnt; datasette serve --host 0.0.0.0 -i /mnt/large.db --cors --port $PORT
```
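It would also be possible to copy the file onto the ~6GB available on the ephemeral container filesystem on startup. A minimal sketch of what that CMD might look like, assuming `squashfs-tools` is installed in the image (it isn't in the Dockerfile above) and `/data` doesn't already exist:
```
# Hypothetical alternative: extract the database onto the ephemeral disk at startup, then serve it from there
CMD unsquashfs -d /data large.squashfs && datasette serve --host 0.0.0.0 -i /data/large.db --cors --port $PORT
```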
A little against the spirit of the thing, though? In this example the whole Docker image is 2.42 GB and the squashfs version is 1.14 GB.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",335200136,