issue_comments


7 rows where author_association = "CONTRIBUTOR" and issue = 1015646369 sorted by updated_at descending


Facets:
  • user: fgregg (4), ghing (3)
  • issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (7)
  • author_association: CONTRIBUTOR (7)
Columns: id, html_url, issue_url, node_id, user, created_at, updated_at (sorted descending), author_association, body, reactions, issue, performed_via_github_app. All seven rows share issue_url https://api.github.com/repos/simonw/datasette/issues/1480, and performed_via_github_app is null throughout.
1271101072 · IC_kwDOBm6k_c5Lw3aQ · fgregg (536941) · 2022-10-07T04:39:10Z · CONTRIBUTOR
https://github.com/simonw/datasette/issues/1480#issuecomment-1271101072

switching from immutable=1 to mode=ro completely addressed this. see https://github.com/simonw/datasette/issues/1836#issuecomment-1271100651 for details.

Reactions: none. Issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (1015646369)
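The `immutable=1` vs `mode=ro` distinction fgregg mentions is a SQLite open-flag difference, and can be sketched with the standard library alone (a minimal illustration, not Datasette's actual code path):

```python
import os
import sqlite3
import tempfile

# Build a throwaway database to open read-only.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x INTEGER)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()
con.close()

# mode=ro: the connection is read-only, but SQLite still assumes the
# file can change underneath it and keeps its usual locking behavior.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
assert ro.execute("SELECT x FROM t").fetchone() == (1,)

# Writes are refused on a read-only connection.
try:
    ro.execute("INSERT INTO t VALUES (2)")
    raise AssertionError("write should have been refused")
except sqlite3.OperationalError:
    pass

# immutable=1 goes further: SQLite is told the file will never change,
# so it can skip locking and read the file differently.
imm = sqlite3.connect(f"file:{path}?immutable=1", uri=True)
assert imm.execute("SELECT x FROM t").fetchone() == (1,)
```

Datasette's `-i`/`--immutable` option opens files in the immutable mode; the comment's finding is that on Cloud Run the plain read-only mode avoided the memory blow-up.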
1269847461 · IC_kwDOBm6k_c5LsFWl · fgregg (536941) · 2022-10-06T11:21:49Z · CONTRIBUTOR
https://github.com/simonw/datasette/issues/1480#issuecomment-1269847461

thanks @simonw, i'll spend a little more time trying to figure out why this isn't working on cloudrun, and then will flip over to fly if i can't.

Reactions: none. Issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (1015646369)
1268629159 · IC_kwDOBm6k_c5Lnb6n · fgregg (536941) · 2022-10-05T16:00:55Z · CONTRIBUTOR
https://github.com/simonw/datasette/issues/1480#issuecomment-1268629159

as a next step, i'll fetch the docker image from the google registry, and see what memory and disk usage looks like when i run it locally.

Reactions: none. Issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (1015646369)
1268613335 · IC_kwDOBm6k_c5LnYDX · fgregg (536941) · 2022-10-05T15:45:49Z · CONTRIBUTOR
https://github.com/simonw/datasette/issues/1480#issuecomment-1268613335

running into this as i continue to grow my labor data warehouse.

Here a Cloud Run PM says the container size should not count against memory: https://stackoverflow.com/a/56570717

Reactions: none. Issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (1015646369)
947203725 · IC_kwDOBm6k_c44dS6N · ghing (110420) · 2021-10-20T00:21:54Z · CONTRIBUTOR
https://github.com/simonw/datasette/issues/1480#issuecomment-947203725

This StackOverflow post, sqlite - Cloud Run: Why does my instance need so much RAM?, points to this section of the Cloud Run docs that says:

"Note that the Cloud Run container instances run in an environment where the files written to the local filesystem count towards the available memory. This also includes any log files that are not written to /var/log/* or /dev/log."

Does datasette write any large files when starting?

Or does the COPY command in the Dockerfile count as writing to the local filesystem?

Reactions: none. Issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (1015646369)
947196177 · IC_kwDOBm6k_c44dRER · ghing (110420) · 2021-10-20T00:05:10Z · CONTRIBUTOR
https://github.com/simonw/datasette/issues/1480#issuecomment-947196177

I was looking through the Dockerfile-generation code to see if there was anything that would cause memory usage to be a lot during deployment.

I noticed that the Dockerfile runs datasette --inspect. Is it possible that this is using a lot of memory?

Or would that come into play when running gcloud builds submit, not when it's actually deployed?

Reactions: none. Issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (1015646369)
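On ghing's build-time vs deploy-time question: in a Dockerfile, RUN lines execute during the image build (i.e. during gcloud builds submit), while only the CMD line executes in the deployed Cloud Run container. A hypothetical sketch of a `datasette publish`-style Dockerfile (file names and layout here are illustrative, not the exact generated output):

```dockerfile
FROM python:3.11-slim

RUN pip install datasette

# COPY adds the database to an image layer: that is registry/disk
# size, not runtime RAM by itself.
COPY data.db /app/data.db

# RUN executes at build time, so whatever memory `datasette inspect`
# needs is spent on the build worker, not the Cloud Run instance.
RUN datasette inspect /app/data.db --inspect-file /app/inspect-data.json

# CMD is the only part that runs on the deployed instance.
CMD ["datasette", "serve", "/app/data.db", \
     "--inspect-file", "/app/inspect-data.json", \
     "--host", "0.0.0.0", "--port", "8080"]
```

Under this reading, the inspect step's memory cost lands on the `gcloud builds submit` machine; whatever exceeds the Cloud Run limit must come from serving, or from writes to the instance's in-memory filesystem.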
938171377 · IC_kwDOBm6k_c4361vx · ghing (110420) · 2021-10-07T21:33:12Z · CONTRIBUTOR
https://github.com/simonw/datasette/issues/1480#issuecomment-938171377

Thanks for the reply @simonw. What services have you had better success with than Cloud Run for larger databases?

Also, what about my issue description makes you think there may be a workaround?

Is there any instrumentation I could add to see at which point in the deploy the memory usage spikes? Should I be able to see this whether it's running under Docker locally, or do you suspect this is Cloud Run-specific?

Reactions: none. Issue: Exceeding Cloud Run memory limits when deploying a 4.8G database (1015646369)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
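The page's filter (author_association = "CONTRIBUTOR" and issue = 1015646369, sorted by updated_at descending) is a plain query over this schema. A self-contained sketch using one of the rows above as sample data (foreign-key REFERENCES clauses omitted so the snippet stands alone):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER,
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# One real row from the table above.
con.execute(
    "INSERT INTO issue_comments (id, node_id, user, created_at, updated_at,"
    " author_association, body, issue) VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    (1271101072, "IC_kwDOBm6k_c5Lw3aQ", 536941,
     "2022-10-07T04:39:10Z", "2022-10-07T04:39:10Z", "CONTRIBUTOR",
     "switching from immutable=1 to mode=ro completely addressed this.",
     1015646369),
)

# The same filter and ordering the page applies.
rows = con.execute(
    "SELECT id, updated_at FROM issue_comments"
    " WHERE author_association = ? AND issue = ?"
    " ORDER BY updated_at DESC",
    ("CONTRIBUTOR", 1015646369),
).fetchall()
print(rows)  # [(1271101072, '2022-10-07T04:39:10Z')]
```

The `idx_issue_comments_issue` index defined above is what lets the `issue = ?` filter avoid a full table scan.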
Powered by Datasette · Queries took 127.702ms · About: github-to-sqlite