issue_comments
3 rows where issue = 396212021 and user = 127565 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
641908346 | https://github.com/simonw/datasette/issues/394#issuecomment-641908346 | https://api.github.com/repos/simonw/datasette/issues/394 | MDEyOklzc3VlQ29tbWVudDY0MTkwODM0Ng== | wragge 127565 | 2020-06-10T10:22:54Z | 2020-06-10T10:22:54Z | CONTRIBUTOR | There's a working demo here: https://github.com/wragge/datasette-test And if you want something that's more than just proof-of-concept, here's a notebook which does some harvesting from web archives and then displays the results using Datasette: https://nbviewer.jupyter.org/github/GLAM-Workbench/web-archives/blob/master/explore_presentations.ipynb | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | base_url configuration setting 396212021 | |
604166918 | https://github.com/simonw/datasette/issues/394#issuecomment-604166918 | https://api.github.com/repos/simonw/datasette/issues/394 | MDEyOklzc3VlQ29tbWVudDYwNDE2NjkxOA== | wragge 127565 | 2020-03-26T00:56:30Z | 2020-03-26T00:56:30Z | CONTRIBUTOR | Thanks! I'm trying to launch Datasette from within a notebook using the jupyter-server-proxy and the new base_url setting. My test repository is here: https://github.com/wragge/datasette-test | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | base_url configuration setting 396212021 | |
602907207 | https://github.com/simonw/datasette/issues/394#issuecomment-602907207 | https://api.github.com/repos/simonw/datasette/issues/394 | MDEyOklzc3VlQ29tbWVudDYwMjkwNzIwNw== | wragge 127565 | 2020-03-23T23:12:18Z | 2020-03-23T23:12:18Z | CONTRIBUTOR | This would also be useful for running Datasette in Jupyter notebooks on Binder. While you can use Jupyter-server-proxy to access Datasette on Binder, the links are broken. Why run Datasette on Binder? I'm developing a range of Jupyter notebooks that are aimed at getting humanities researchers to explore data from libraries, archives, and museums. Many of them are aimed at researchers with limited digital skills, so being able to run examples in Binder without them installing anything is fantastic. For example, there are a series of notebooks that help researchers harvest digitised historical newspaper articles from Trove. The metadata from this harvest is saved as a CSV file that users can download. I've also provided some extra notebooks that use Pandas etc to demonstrate ways of analysing and visualising the harvested data. But it would be really nice if, after completing a harvest, the user could spin up Datasette for some initial exploration of their harvested data without ever leaving their browser. | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | base_url configuration setting 396212021 | |
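The comments above are about serving Datasette behind jupyter-server-proxy (for example on Binder), which is exactly the situation the base_url setting addresses. The snippet below is a minimal sketch of that wiring, not the configuration from the linked datasette-test repository: it assumes jupyter-server-proxy's `{port}` and `{base_url}` command-template placeholders, a Datasette release that accepts `--setting base_url ...` (older releases used `--config base_url:...`), and a placeholder database file named `data.db`.

```python
# jupyter_server_config.py -- sketch of proxying Datasette inside Jupyter/Binder.
# Assumes jupyter-server-proxy is installed. {port} and {base_url} are its
# command-template placeholders; "data.db" and the "datasette/" prefix are
# illustrative placeholders, not values taken from the demo repository.
c = get_config()  # noqa -- provided by Jupyter when this config file is loaded

c.ServerProxy.servers = {
    "datasette": {
        # Start Datasette on the port jupyter-server-proxy chooses, and tell it
        # the URL prefix it is served under so the links it generates resolve.
        "command": [
            "datasette", "serve", "data.db",
            "--port", "{port}",
            "--setting", "base_url", "{base_url}datasette/",
        ],
        "absolute_url": False,
    }
}
```

With a configuration along these lines, Datasette would appear under the `datasette/` path of the Jupyter server, which is why the comments describe base_url as the missing piece for Binder.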
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
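The query behind this page ("3 rows where issue = 396212021 and user = 127565 sorted by updated_at descending") can be reproduced directly against this schema. A small sketch using Python's sqlite3 module is below; the `github.db` filename is a hypothetical name for a database built with the schema above.

```python
import sqlite3

# Reproduce the page's filter: comments on issue 396212021 by user 127565,
# newest update first. "github.db" is a hypothetical database filename.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = 396212021 AND user = 127565
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row["id"], row["updated_at"], row["body"][:60])
```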