issue_comments


6 rows where issue = 1193090967 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at author_association body reactions issue performed_via_github_app
1094453751 https://github.com/simonw/datasette/issues/1699#issuecomment-1094453751 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BPAn3 eyeseast 25778 2022-04-11T01:32:12Z 2022-04-11T01:32:12Z CONTRIBUTOR

Was looking through old issues and realized a bunch of this got discussed in #1101 (including by me!), so sorry to rehash all this. Happy to help with whatever piece of it I can. Would be very excited to be able to use format plugins with exports.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1092386254 https://github.com/simonw/datasette/issues/1699#issuecomment-1092386254 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BHH3O eyeseast 25778 2022-04-08T02:39:25Z 2022-04-08T02:39:25Z CONTRIBUTOR

And just to think this through a little more, here's what stream_geojson might look like:

```python
async def stream_geojson(datasette, columns, rows, database, stream):
    db = datasette.get_database(database)
    for row in rows:
        feature = await row_to_geojson(row, db)
        stream.write(feature + "\n")  # just assuming newline mode for now
```

Alternately, that could be an async generator, like this:

```python
async def stream_geojson(datasette, columns, rows, database):
    db = datasette.get_database(database)
    for row in rows:
        feature = await row_to_geojson(row, db)
        yield feature
```

Not sure which makes more sense, but I think this pattern would open up a lot of possibility. If you had your `stream_indented_json` function, you could do `yield from stream_indented_json(rows, 2)` and be on your way.
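One wrinkle worth noting: async generators can't use `yield from`, so that delegation would have to be written with `async for`. A minimal sketch, where `stream_indented_json` is a hypothetical helper (not anything that exists in Datasette) and rows are assumed to be mapping-like:

```python
import json


async def stream_indented_json(rows, indent=2):
    # Hypothetical helper: yield each row as an indented JSON document.
    for row in rows:
        yield json.dumps(dict(row), indent=indent)


async def stream_json(datasette, columns, rows, database):
    # `yield from stream_indented_json(rows, 2)` would be a SyntaxError
    # inside an async generator, so the delegation is spelled with `async for`.
    async for chunk in stream_indented_json(rows, 2):
        yield chunk
```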

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1092370880 https://github.com/simonw/datasette/issues/1699#issuecomment-1092370880 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BHEHA eyeseast 25778 2022-04-08T02:07:40Z 2022-04-08T02:07:40Z CONTRIBUTOR

So maybe register_output_renderer returns something like this:

```python
@hookimpl
def register_output_renderer(datasette):
    return {
        "extension": "geojson",
        "render": render_geojson,
        "stream": stream_geojson,
        "can_render": can_render_geojson,
    }
```

And stream gets an iterator, instead of a list of rows, so it can efficiently handle large queries. Maybe it also gets passed a destination stream, or it returns an iterator. I'm not sure what makes more sense. Either way, that might cover both CLI exports and streaming responses.
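To make that concrete, here is a rough sketch of how a CLI export path might drive such a stream callback. The export_query helper and the keyword calling convention are assumptions for illustration, not existing Datasette code; only db.execute and get_database are real Datasette APIs here:

```python
async def export_query(datasette, database, sql, renderer, outfile):
    # Hypothetical CLI export path: run the query, then hand a row iterator
    # and a destination stream to the plugin's "stream" callback, matching the
    # stream_geojson(datasette, columns, rows, database, stream) signature above.
    db = datasette.get_database(database)
    results = await db.execute(sql)
    with open(outfile, "w") as stream:
        await renderer["stream"](
            datasette=datasette,
            columns=results.columns,
            rows=iter(results.rows),
            database=database,
            stream=stream,
        )

# Usage would be something along the lines of:
# asyncio.run(export_query(ds, "alltheplaces", "select * from dunkin_in_suffolk",
#                          renderer, "dunkin_in_suffolk.geojson"))
```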

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1092361727 https://github.com/simonw/datasette/issues/1699#issuecomment-1092361727 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BHB3_ simonw 9599 2022-04-08T01:47:43Z 2022-04-08T01:47:43Z OWNER

A render mode for that plugin hook that writes to a stream is exactly what I have in mind:

- #1062

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1092357672 https://github.com/simonw/datasette/issues/1699#issuecomment-1092357672 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BHA4o eyeseast 25778 2022-04-08T01:39:40Z 2022-04-08T01:39:40Z CONTRIBUTOR

> My best thought on how to differentiate them so far is plugins: if Datasette plugins that provide alternative outputs - like .geojson and .yml and suchlike - also work for the datasette query command, that would make a lot of sense to me.

That's my thinking, too. It's really the thing I've been wanting since writing datasette-geojson, since I'm always exporting with datasette --get. The workflow I'm always looking for is something like this:

```sh
cd alltheplaces-datasette
datasette query dunkin_in_suffolk -f geojson -o dunkin_in_suffolk.geojson
```

I think this probably needs either a new plugin hook separate from register_output_renderer or a way to use that without going through the HTTP stack. Or maybe a render mode that writes to a stream instead of a response. Maybe there's a new key in the dictionary that register_output_renderer returns that handles CLI exports.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  
1092321966 https://github.com/simonw/datasette/issues/1699#issuecomment-1092321966 https://api.github.com/repos/simonw/datasette/issues/1699 IC_kwDOBm6k_c5BG4Ku simonw 9599 2022-04-08T00:20:32Z 2022-04-08T00:20:56Z OWNER

If we do this, I'm keen to have it be more than just an alternative to the existing sqlite-utils command - especially since, if I add sqlite-utils as a dependency of Datasette in the future, that command will be installed as part of pip install datasette anyway.

My best thought on how to differentiate them so far is plugins: if Datasette plugins that provide alternative outputs - like .geojson and .yml and suchlike - also work for the datasette query command, that would make a lot of sense to me.

One way that could work: a --fmt geojson option to this command which uses the plugin that was registered for the specified extension.
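For illustration, that lookup from a --fmt value to a registered renderer could be as simple as the sketch below. The renderer_for_format helper and the renderers list are assumptions, with each entry shaped like the register_output_renderer return value shown earlier in this thread:

```python
import click


def renderer_for_format(renderers, fmt):
    # `renderers` is assumed to be the list of dicts collected from the
    # register_output_renderer hook, e.g. {"extension": "geojson", ...}.
    for renderer in renderers:
        if renderer["extension"] == fmt:
            return renderer
    raise click.ClickException(f"No output renderer registered for --fmt {fmt}")
```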

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
Proposal: datasette query 1193090967  

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
, [performed_via_github_app] TEXT);
CREATE INDEX [idx_issue_comments_issue]
                ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
                ON [issue_comments] ([user]);
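For reference, the six rows on this page come straight from that table, so they can be reproduced against a local github-to-sqlite database; the github.db filename below is an assumption:

```python
import sqlite3

# Assumes a database built by github-to-sqlite; adjust the filename as needed.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row
comments = conn.execute(
    "select id, user, updated_at, author_association from issue_comments "
    "where issue = ? order by updated_at desc",
    (1193090967,),
).fetchall()
for comment in comments:
    print(comment["id"], comment["updated_at"], comment["author_association"])
```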