
issue_comments


3 rows where issue = 527670799 and user = 172847 sorted by updated_at descending


id: 559916057
html_url: https://github.com/simonw/datasette/issues/639#issuecomment-559916057
issue_url: https://api.github.com/repos/simonw/datasette/issues/639
node_id: MDEyOklzc3VlQ29tbWVudDU1OTkxNjA1Nw==
user: pkoppstein (172847)
created_at: 2019-11-30T06:08:50Z
updated_at: 2019-11-30T06:08:50Z
author_association: NONE

@simonw, @jacobian - I was able to resolve the metadata.json issue by adding -m metadata.json to the Procfile. Now git push heroku master picks up the changes, though I have the impression that Heroku is doing more work than necessary (e.g. one of the informational messages is "Installing requirements with pip").

I also had to set the environment variable WEB_CONCURRENCY -- I used WEB_CONCURRENCY=1.

I am still anxious to know whether it's possible for Datasette on Heroku to access the SQLite file at another location. Cloudcube seems the most promising, and I'm hoping it can be done by tweaking the Procfile suitably, but maybe that's too optimistic?
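The Procfile fix described above can be sketched as follows; the database filename (mydb.db) and app name (MYAPP) are placeholders, and the exact datasette flags may vary by version:

```
# Procfile -- pass the metadata file to datasette explicitly
web: datasette serve mydb.db -m metadata.json --host 0.0.0.0 --port $PORT
```

The concurrency variable is then set once per app, e.g. heroku config:set WEB_CONCURRENCY=1 -a MYAPP.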

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: updating metadata.json without recreating the app (527670799)
id: 558852316
html_url: https://github.com/simonw/datasette/issues/639#issuecomment-558852316
issue_url: https://api.github.com/repos/simonw/datasette/issues/639
node_id: MDEyOklzc3VlQ29tbWVudDU1ODg1MjMxNg==
user: pkoppstein (172847)
created_at: 2019-11-26T22:54:23Z
updated_at: 2019-11-26T22:54:23Z
author_association: NONE

@jacobian - Thanks for your help. Having to upload an entire slug each time a small change is needed in metadata.json seems no better than the current situation so I probably won't go down that rabbit hole just yet. In any case, the really important goal is moving the SQLite file out of Heroku in a way that the Heroku app can still read it efficiently. Is this possible? Is Cloudcube the right place to start? Is there any alternative?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: updating metadata.json without recreating the app (527670799)
id: 558437707
html_url: https://github.com/simonw/datasette/issues/639#issuecomment-558437707
issue_url: https://api.github.com/repos/simonw/datasette/issues/639
node_id: MDEyOklzc3VlQ29tbWVudDU1ODQzNzcwNw==
user: pkoppstein (172847)
created_at: 2019-11-26T03:02:53Z
updated_at: 2019-11-26T03:03:29Z
author_association: NONE

@simonw - Thanks for the reply!

My reading of the Heroku documentation is that if one sets things up using git, one can use "git push" (from a local, GitHub, or GitLab git repository to Heroku) to update a Heroku deployment, but I'm not sure exactly how this works. However, assuming there is some way to use "git push" to update the Heroku deployment, the question becomes how to do this in conjunction with Datasette.

Again, based on my reading of the Heroku documentation, it would seem that the following should work (but it doesn't quite):

1) Use datasette to create a deployment (named MYAPP)
2) Put it in maintenance mode
3) heroku git:clone -a MYAPP -- this results in an empty repository (as expected)
4) In another directory, heroku slugs:download -a MYAPP
5) Copy the downloaded slug into the repository
6) Make some change to metadata.json
7) Commit and push it back
8) Take the deployment out of maintenance mode
9) Refresh the deployment
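The sequence above, expressed as commands (MYAPP is a placeholder app name, and heroku slugs:download assumes the heroku-slugs CLI plugin is installed):

```
heroku maintenance:on -a MYAPP
heroku git:clone -a MYAPP
# in another directory:
heroku slugs:download -a MYAPP
# copy the downloaded slug contents into the cloned repository,
# edit metadata.json, then:
git add metadata.json
git commit -m "Update metadata.json"
git push heroku master
heroku maintenance:off -a MYAPP
```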

Using the heroku console, I've verified that the edits appear on heroku, but somehow they are not reflected in the running app.

I'm hopeful that with some small tweak or perhaps the addition of a bit of voodoo, this strategy will work.

I think it will be important to get this working for another reason: getting Heroku, Cloudcube, and Datasette to work together to overcome the slug size limitation, so that large SQLite databases can be deployed to Heroku using Datasette.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: updating metadata.json without recreating the app (527670799)


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
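The filtered view at the top of this page ("3 rows where issue = 527670799 and user = 172847 sorted by updated_at descending") corresponds to a straightforward query against this schema. A minimal sketch using Python's sqlite3, with a trimmed version of the table and only the columns the query touches:

```python
import sqlite3

# Build a trimmed issue_comments table (in-memory, for illustration only)
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE issue_comments (
        id INTEGER PRIMARY KEY,
        user INTEGER,
        updated_at TEXT,
        issue INTEGER)"""
)

# The three rows shown on this page (id, user, updated_at, issue)
rows = [
    (559916057, 172847, "2019-11-30T06:08:50Z", 527670799),
    (558852316, 172847, "2019-11-26T22:54:23Z", 527670799),
    (558437707, 172847, "2019-11-26T03:03:29Z", 527670799),
]
conn.executemany("INSERT INTO issue_comments VALUES (?, ?, ?, ?)", rows)

# The query behind the page's filter: ISO 8601 timestamps sort
# correctly as plain text, so ORDER BY works on the TEXT column
result = conn.execute(
    "SELECT id FROM issue_comments WHERE issue = ? AND user = ? "
    "ORDER BY updated_at DESC",
    (527670799, 172847),
).fetchall()
print([r[0] for r in result])  # → [559916057, 558852316, 558437707]
```

Because created_at and updated_at are stored as ISO 8601 TEXT, the descending sort returns the newest comment first without any date parsing.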
Powered by Datasette · Queries took 21.183ms · About: github-to-sqlite