issue_comments
12 rows where issue = 421546944 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
752098906 | https://github.com/simonw/datasette/issues/417#issuecomment-752098906 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDc1MjA5ODkwNg== | psychemedia 82988 | 2020-12-29T14:34:30Z | 2020-12-29T14:34:50Z | CONTRIBUTOR | FWIW, I had a look at […] Not a production thing, just an experiment trying to explore what might be possible... | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
751504136 | https://github.com/simonw/datasette/issues/417#issuecomment-751504136 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDc1MTUwNDEzNg== | drewda 212369 | 2020-12-27T19:02:06Z | 2020-12-27T19:02:06Z | NONE | Very much looking forward to seeing this functionality come together. This is probably out of scope for an initial release, but in the future it could be useful to also think about how to run this in a containerized context. For example, an immutable Datasette container that points to an S3 bucket of SQLite DBs or CSVs. Or an immutable Datasette container pointing to an NFS volume elsewhere on a Kubernetes cluster. | { "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
751127485 | https://github.com/simonw/datasette/issues/417#issuecomment-751127485 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDc1MTEyNzQ4NQ== | simonw 9599 | 2020-12-24T22:58:05Z | 2020-12-24T22:58:05Z | OWNER | That's a great idea. I'd ruled that out because working with the different operating system versions of those is tricky, but if […] | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
751127384 | https://github.com/simonw/datasette/issues/417#issuecomment-751127384 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDc1MTEyNzM4NA== | dyllan-to-you 1279360 | 2020-12-24T22:56:48Z | 2020-12-24T22:56:48Z | NONE | Instead of scanning the directory every 10s, have you considered listening for native system events to notify you of updates? I think Python has a nice module to do this for you, called watchdog. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
586599424 | https://github.com/simonw/datasette/issues/417#issuecomment-586599424 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDU4NjU5OTQyNA== | psychemedia 82988 | 2020-02-15T15:12:19Z | 2020-02-15T15:12:33Z | CONTRIBUTOR | So could the polling support also allow you to call sqlite_utils to update a database with CSV files? (Though I'm guessing you would only want to handle changed files? Do your scrapers check and cache CSV datestamps/hashes?) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
586066798 | https://github.com/simonw/datasette/issues/417#issuecomment-586066798 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDU4NjA2Njc5OA== | simonw 9599 | 2020-02-14T02:24:54Z | 2020-02-14T02:24:54Z | OWNER | I'm going to move this over to a draft pull request. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
586065843 | https://github.com/simonw/datasette/issues/417#issuecomment-586065843 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDU4NjA2NTg0Mw== | simonw 9599 | 2020-02-14T02:20:53Z | 2020-02-14T02:20:53Z | OWNER | MVP for this feature: just do it once on startup, don't scan for new files every X seconds. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
586047525 | https://github.com/simonw/datasette/issues/417#issuecomment-586047525 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDU4NjA0NzUyNQ== | simonw 9599 | 2020-02-14T01:03:43Z | 2020-02-14T01:59:02Z | OWNER | OK, I have a plan. I'm going to try and implement this as a core Datasette feature (no plugins) with the following design: […] To check if a file is valid SQLite, Datasette will first check if the first few bytes of the file are […] | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
586047995 | https://github.com/simonw/datasette/issues/417#issuecomment-586047995 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDU4NjA0Nzk5NQ== | simonw 9599 | 2020-02-14T01:05:20Z | 2020-02-14T01:05:20Z | OWNER | I'm going to add two methods to the Datasette class to help support this work (and to enable exciting new plugin opportunities in the future): […] | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
474280581 | https://github.com/simonw/datasette/issues/417#issuecomment-474280581 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDQ3NDI4MDU4MQ== | psychemedia 82988 | 2019-03-19T10:06:42Z | 2019-03-19T10:06:42Z | CONTRIBUTOR | This would be really interesting, but several possibilities arise in use, I think? For example: […] CSV files may also have messy names compared to the table you want. Or, for an update CSV, it may have the form […] | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
473312514 | https://github.com/simonw/datasette/issues/417#issuecomment-473312514 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDQ3MzMxMjUxNA== | simonw 9599 | 2019-03-15T14:42:07Z | 2019-03-17T22:12:30Z | OWNER | A neat ability of Datasette Library would be if it can work against other files that have been dropped into the folder. In particular: if a user drops a CSV file into the folder, how about automatically converting that CSV file to SQLite using sqlite-utils? | { "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
473308631 | https://github.com/simonw/datasette/issues/417#issuecomment-473308631 | https://api.github.com/repos/simonw/datasette/issues/417 | MDEyOklzc3VlQ29tbWVudDQ3MzMwODYzMQ== | simonw 9599 | 2019-03-15T14:32:13Z | 2019-03-15T14:32:13Z | OWNER | This would allow Datasette to be easily used as a "data library" (like a data warehouse but less expectation of big data querying technology such as Presto). One of the things I learned at the NICAR CAR 2019 conference in Newport Beach is that there is a very real need for some kind of easily accessible data library at most newsrooms. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Datasette Library 421546944 | |
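
The watchdog suggestion in dyllan-to-you's comment above replaces the 10-second polling loop with native filesystem events. A minimal sketch of that idea, using watchdog's real `Observer`/`FileSystemEventHandler` API; the watched path is a hypothetical stand-in and the `print()` stands in for whatever re-attach logic Datasette would actually run:

```python
# Sketch: react to newly created .db files via OS file-system events
# instead of polling. The directory path and the print() stand-in for
# attaching a database are assumptions, not Datasette's internals.
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


class NewDatabaseHandler(FileSystemEventHandler):
    def on_created(self, event):
        # Fires once per file the OS reports as newly created
        if not event.is_directory and event.src_path.endswith(".db"):
            print(f"new database dropped in: {event.src_path}")


observer = Observer()
observer.schedule(NewDatabaseHandler(), "/path/to/dbs", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```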
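psychemedia's question about handling only changed files comes down to caching a datestamp or content hash per CSV. A sketch of the hash approach; the in-memory cache is an illustrative assumption, and a real implementation would persist it between runs:

```python
# Sketch: skip unchanged CSVs on each poll by remembering a SHA-256
# of their contents. _seen is an illustrative in-memory cache.
import hashlib
from pathlib import Path

_seen = {}


def changed_csvs(directory):
    """Yield only the CSV files whose contents changed since last call."""
    for path in sorted(Path(directory).glob("*.csv")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if _seen.get(path) != digest:
            _seen[path] = digest
            yield path
```

Comparing `path.stat().st_mtime` before hashing would avoid re-reading large files that clearly have not changed.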
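simonw's plan comment breaks off at "the first few bytes of the file are"; the standard check here is SQLite's documented 16-byte magic header, the string `SQLite format 3` followed by a NUL byte. Combined with the "just do it once on startup" MVP from the same thread, a sketch (the scanned directory is an assumption):

```python
# Sketch: a one-off startup scan that keeps only files starting with
# SQLite's documented 16-byte magic header. The directory path is an
# illustrative assumption.
from pathlib import Path

SQLITE_MAGIC = b"SQLite format 3\x00"


def sqlite_files(directory):
    for path in sorted(Path(directory).iterdir()):
        if not path.is_file():
            continue
        with path.open("rb") as f:
            if f.read(16) == SQLITE_MAGIC:
                yield path


for db_path in sqlite_files("/path/to/dbs"):
    print(f"would attach {db_path}")
```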
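The two Datasette-class methods elided from simonw's comment plausibly correspond to the `add_database()` and `remove_database()` internals that Datasette later documented for plugins; treating that correspondence as an assumption, a usage sketch against the documented API:

```python
# Sketch using Datasette's documented internals for registering and
# removing databases at runtime. Whether these are the exact two
# methods the comment proposed is an assumption; the file path and
# the "attached" name are illustrative.
from datasette.app import Datasette
from datasette.database import Database

ds = Datasette()
ds.add_database(Database(ds, path="/path/to/dbs/example.db"), name="attached")
# ...later, when the file disappears from the folder:
ds.remove_database("attached")
```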
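Finally, the CSV-to-SQLite conversion floated in the 2019 comments maps directly onto sqlite-utils' documented Python API. A sketch; the database, CSV, and table names are illustrative:

```python
# Sketch: load a dropped CSV into a SQLite table with sqlite-utils.
# "library.db", "example.csv" and the table name are illustrative.
import csv

import sqlite_utils

db = sqlite_utils.Database("library.db")
with open("example.csv", newline="") as f:
    db["example"].insert_all(csv.DictReader(f))
print(db["example"].count, "rows imported")
```

The equivalent CLI one-liner is `sqlite-utils insert library.db example example.csv --csv`.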
Table schema:

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```