
issue_comments


3 rows where author_association = "CONTRIBUTOR" and user = 110420 sorted by updated_at descending

id: 947203725
html_url: https://github.com/simonw/datasette/issues/1480#issuecomment-947203725
issue_url: https://api.github.com/repos/simonw/datasette/issues/1480
node_id: IC_kwDOBm6k_c44dS6N
user: ghing (110420)
created_at: 2021-10-20T00:21:54Z
updated_at: 2021-10-20T00:21:54Z
author_association: CONTRIBUTOR
body:

This StackOverflow post, "sqlite - Cloud Run: Why does my instance need so much RAM?", points to this section of the Cloud Run docs that says:

Note that the Cloud Run container instances run in an environment where the files written to the local filesystem count towards the available memory. This also includes any log files that are not written to /var/log/* or /dev/log.

Does datasette write any large files when starting?

Or does the COPY command in the Dockerfile count as writing to the local filesystem?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (1015646369)
performed_via_github_app: (blank)
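
The question in the comment above is testable locally. Below is a minimal sketch in Python, assuming a data.db file in the working directory and datasette on the PATH (both placeholder names): it snapshots file sizes before starting the server, waits a few seconds, and reports anything that appeared or changed size.

import os
import subprocess
import time

def snapshot(root="."):
    # Map every file under root to its size in bytes.
    sizes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes[path] = os.path.getsize(path)
            except OSError:
                pass
    return sizes

before = snapshot()
# data.db and the port number are placeholder values.
proc = subprocess.Popen(["datasette", "data.db", "--port", "8432"])
time.sleep(5)  # arbitrary settling time for startup
after = snapshot()
proc.terminate()

# Report files that appeared or changed size during startup.
for path, size in after.items():
    if size != before.get(path, 0):
        print(path, before.get(path, 0), "->", size)

This only observes the serve step; the COPY in the Dockerfile happens at image-build time, so whether its output counts toward instance memory is a question about Cloud Run's filesystem rather than about Datasette.
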
id: 947196177
html_url: https://github.com/simonw/datasette/issues/1480#issuecomment-947196177
issue_url: https://api.github.com/repos/simonw/datasette/issues/1480
node_id: IC_kwDOBm6k_c44dRER
user: ghing (110420)
created_at: 2021-10-20T00:05:10Z
updated_at: 2021-10-20T00:05:10Z
author_association: CONTRIBUTOR
body:

I was looking through the Dockerfile-generation code to see if there was anything that would cause high memory usage during deployment.

I noticed that the Dockerfile runs datasette --inspect. Is it possible that this step is using a lot of memory?

Or would that come into play when running gcloud builds submit, not when it's actually deployed?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (1015646369)
performed_via_github_app: (blank)
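
One way to answer the question above without a full deploy is to run the inspect step by itself and record its peak memory. A minimal sketch in Python, assuming a local data.db and that the build step matches the datasette inspect invocation in the generated Dockerfile; the resource module is Unix-only:

import resource
import subprocess

# Run the same inspect step the generated Dockerfile runs at build time.
subprocess.run(
    ["datasette", "inspect", "data.db", "--inspect-file", "inspect-data.json"],
    check=True,
)

# Peak resident set size of child processes;
# kilobytes on Linux, bytes on macOS.
peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
print("peak RSS of inspect step:", peak)

If the peak tracks the size of the database, that would point at the build (gcloud builds submit) rather than the deployed instance.
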
id: 938171377
html_url: https://github.com/simonw/datasette/issues/1480#issuecomment-938171377
issue_url: https://api.github.com/repos/simonw/datasette/issues/1480
node_id: IC_kwDOBm6k_c4361vx
user: ghing (110420)
created_at: 2021-10-07T21:33:12Z
updated_at: 2021-10-07T21:33:12Z
author_association: CONTRIBUTOR
body:

Thanks for the reply @simonw. What services have you had better success with than Cloud Run for larger databases?

Also, what about my issue description makes you think there may be a workaround?

Is there any instrumentation I could add to see at which point in the deploy the memory usage spikes? Should I be able to see this when it's running under Docker locally, or do you suspect this is Cloud Run-specific?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (1015646369)
performed_via_github_app: (blank)
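
For instrumentation, one option is to poll the container's memory counter from inside the container while the server starts, which works the same under local Docker and, in principle, on Cloud Run. A minimal sketch in Python, assuming cgroup v2; under cgroup v1 the counter lives at /sys/fs/cgroup/memory/memory.usage_in_bytes instead:

import subprocess
import time

CGROUP_FILE = "/sys/fs/cgroup/memory.current"  # cgroup v2 path (assumption)

t0 = time.time()
# data.db and the port number are placeholder values.
proc = subprocess.Popen(["datasette", "data.db", "--port", "8432"])
for _ in range(30):
    with open(CGROUP_FILE) as f:
        usage = int(f.read())
    print(f"t+{time.time() - t0:4.0f}s memory: {usage / 1024 / 1024:.1f} MiB")
    time.sleep(1)
proc.terminate()

Running docker stats alongside the local container gives a coarser view of the same numbers, which should show whether the spike reproduces outside Cloud Run.
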


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
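
The filter described at the top of this page (author_association = "CONTRIBUTOR" and user = 110420, sorted by updated_at descending) corresponds to a simple query against this schema. A minimal sketch using Python's sqlite3, assuming the github-to-sqlite output file is named github.db (a placeholder name):

import sqlite3

conn = sqlite3.connect("github.db")  # placeholder filename
rows = conn.execute(
    """
    select id, created_at, updated_at, body
    from issue_comments
    where author_association = 'CONTRIBUTOR' and [user] = 110420
    order by updated_at desc
    """
).fetchall()
for comment_id, created, updated, body in rows:
    print(comment_id, updated, body[:60])
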
Powered by Datasette · About: github-to-sqlite