issue_comments


3 rows where issue = 642296989 sorted by updated_at descending

Issue: Consider pagination of canned queries (3 comments)
845985439 · bram2000 · NONE · created 2021-05-21T14:22:41Z · updated 2021-05-21T14:22:41Z
https://github.com/simonw/datasette/issues/856#issuecomment-845985439

Thanks Simon this is working very well.

Reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
843291675 · simonw · OWNER · created 2021-05-18T15:56:45Z · updated 2021-05-18T15:56:45Z
https://github.com/simonw/datasette/issues/856#issuecomment-843291675

Tables and views get "stream all rows" at the moment, so one workaround is to define a SQL view for your query. This only works for queries that don't take any parameters, though you may be able to define a view and then pass it extra fields using the Datasette table interface, like on https://latest.datasette.io/fixtures/paginated_view?content_extra__contains=9
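The view workaround can be sketched like this. This is a minimal sketch against a local SQLite stand-in; the `commits` table, its columns, and the `recent_commits` view name are assumptions for illustration, not from the issue:

```python
import sqlite3

# Stand-in for the database Datasette is serving (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE commits (sha TEXT, author_date TEXT)")
conn.executemany(
    "INSERT INTO commits VALUES (?, ?)",
    [("abc123", "2021-01-01"), ("def456", "2021-01-02")],
)

# Wrapping the canned query's SQL in a view means Datasette exposes it
# like a table, so the "stream all rows" export becomes available. Note
# the view cannot take parameters, which is the limitation described above.
conn.execute(
    "CREATE VIEW recent_commits AS SELECT sha, author_date FROM commits"
)
rows = conn.execute("SELECT * FROM recent_commits").fetchall()
print(rows)
```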

I've explored this problem in a bit more detail in https://github.com/simonw/django-sql-dashboard and I think I have a pattern that could work.

For your canned query, you could implement the pattern yourself by setting up two canned queries that look something like this:

https://github-to-sqlite.dogsheep.net/github?sql=select+rowid%2C+sha%2C+author_date+from+commits+order+by+rowid+limit+1000

    select rowid, sha, author_date from commits order by rowid limit 1000

That gets you the first set of 1,000 results. The important thing here is to order by a unique column, in this case rowid, because then subsequent pages can be loaded by a separate canned query that looks like this:

    select rowid, sha, author_date from commits where rowid > :after order by rowid limit 1000

https://github-to-sqlite.dogsheep.net/github?sql=select+rowid%2C+sha%2C+author_date+from+commits+where+rowid+%3E+%3Aafter+order+by+rowid+limit+1000&after=1000

You then need to write code that knows how to generate these queries: start with the first query, which has no where clause (or, if you are using rowid, just use the second query and pass ?after=0 for the first call), then keep calling the query, passing the last rowid you received as the after parameter.

Basically this is an implementation of keyset pagination with a smart client. When Datasette grows the ability to do this itself it will work by executing this mechanism inside the Python code, which is how the "stream all rows" option for tables works at the moment.
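The smart-client loop described above can be sketched in Python. This runs the same two keyset queries against a local SQLite stand-in rather than the Datasette JSON API; the `commits` table and its contents are assumptions for illustration:

```python
import sqlite3

# Local stand-in for the commits table served by Datasette.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE commits (sha TEXT, author_date TEXT)")
conn.executemany(
    "INSERT INTO commits VALUES (?, ?)",
    [(f"sha{i:04d}", f"2021-05-{(i % 28) + 1:02d}") for i in range(2500)],
)
conn.commit()

PAGE_SIZE = 1000

def fetch_all(conn):
    """Yield every row by repeatedly issuing the 'after' canned query."""
    after = 0  # rowid starts at 1, so after=0 fetches the first page
    while True:
        page = conn.execute(
            "SELECT rowid, sha, author_date FROM commits "
            "WHERE rowid > :after ORDER BY rowid LIMIT :limit",
            {"after": after, "limit": PAGE_SIZE},
        ).fetchall()
        if not page:
            break
        yield from page
        after = page[-1][0]  # last rowid seen becomes the next cursor

rows = list(fetch_all(conn))
print(len(rows))  # 2500 rows, fetched in pages of 1,000
```

Because each page is keyed on the last rowid seen rather than an OFFSET, every page costs the same regardless of how deep into the result set it is.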

Reactions:

{
    "total_count": 1,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 1,
    "rocket": 0,
    "eyes": 0
}
843065142 · bram2000 · NONE · created 2021-05-18T10:49:11Z · updated 2021-05-18T10:49:29Z
https://github.com/simonw/datasette/issues/856#issuecomment-843065142

Hi Simon, I'm using a canned query to do some geospatial stuff, but it maxes out at 1000 rows returned. I can't see any Link headers to follow to get the next page of data. Is there any way currently to work around this 1000 row limit for canned queries?

Thanks, Jon

Reactions:

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
Powered by Datasette · Queries took 20.778ms · About: github-to-sqlite