issue_comments
8 rows where author_association = "NONE" and issue = 1447050738 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
1356842576 | https://github.com/simonw/datasette/issues/1886#issuecomment-1356842576 | https://api.github.com/repos/simonw/datasette/issues/1886 | IC_kwDOBm6k_c5Q38ZQ | stevecrawshaw 18738650 | 2022-12-18T17:34:20Z | 2022-12-18T17:34:20Z | NONE | A bit late to this, but I have made an app to publish air quality data in Bristol, UK. Next step is to see if I can make a Streamlit app based on this to produce some nice charts. | { "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Call for birthday presents: if you're using Datasette, let us know how you're using it here 1447050738 | |
1316289392 | https://github.com/simonw/datasette/issues/1886#issuecomment-1316289392 | https://api.github.com/repos/simonw/datasette/issues/1886 | IC_kwDOBm6k_c5OdPtw | rtanglao 45195 | 2022-11-16T03:54:17Z | 2022-11-16T03:58:56Z | NONE | Happy Birthday Datasette! Thanks Simon!! I use Datasette on everything, most notably my Flickr metadata SQLite DB, to make art. Datasette Lite on my 2019 Flickr metadata is super helpful too: https://lite.datasette.io/?csv=https%3A%2F%2Fraw.githubusercontent.com%2Frtanglao%2Frt-flickr-sqlite-csv%2Fmain%2F2019-roland-flickr-metadata.csv Even better, Datasette Lite on all Firefox support questions from 2021: https://lite.datasette.io/?url=https%3A%2F%2Fraw.githubusercontent.com%2Frtanglao%2Frt-kits-api3%2Fmain%2FYEARLY_CSV_FILES%2F2021-firefox-sumo-questions.db Thanks again Simon! So great! What a gift to the world!!!!!! | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Call for birthday presents: if you're using Datasette, let us know how you're using it here 1447050738 | |
1314627077 | https://github.com/simonw/datasette/issues/1886#issuecomment-1314627077 | https://api.github.com/repos/simonw/datasette/issues/1886 | IC_kwDOBm6k_c5OW54F | jrdmb 11788561 | 2022-11-15T01:19:54Z | 2022-11-15T01:19:54Z | NONE | Datasette usage comments for its 5th anniversary celebration: I use Datasette and related tools for a Cosmology Researcher Talks database app project, which is described in the GitHub Readme. The app, hosted on the Google Cloud Run service, also uses other Datasette-related tools developed by Simon: datasette-render-markdown, csvs-to-sqlite, datasette-template-sql, and datasette-block-robots. This is one of two apps used for querying the talks database; each has its pros/cons, as described in the GitHub Readme. At present, over 170 different sites that host cosmology talks are scraped to collect new talks for import into the SQLite database. The shot-scraper and sqlite-utils tools are a major help for this. I also use the Mastodon API to get my favorites, toots, and boosts into a local database so I can do searches on the data. This was done on Twitter and was then extended to the Mastodon data. Again, sqlite-utils is an important tool for this. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Call for birthday presents: if you're using Datasette, let us know how you're using it here 1447050738 | |
1314455003 | https://github.com/simonw/datasette/issues/1886#issuecomment-1314455003 | https://api.github.com/repos/simonw/datasette/issues/1886 | IC_kwDOBm6k_c5OWP3b | sachaj 17053189 | 2022-11-14T21:51:11Z | 2022-11-14T21:51:11Z | NONE | Happy Birthday Datasette! I am a librarian at the Université du Québec à Montréal (UQAM) and I've been using Datasette to publish excerpts of our library data. There are several use cases I'm working with as a proof of concept: 1. New titles list: based on reports of recent acquisitions by subject, discipline, etc. 2. List of all UQAM theses and dissertations: based on an extract of bibliographic records. 3. List of all publications by UQAM authors: based on an extract of bibliographic records. See our prototype under construction here: https://datasette-bib.uqam.ca/ (some bits and pieces have been translated into French). Datasette is amazing; there is so much potential here for libraries. Thanks to Simon and all the contributors for this outstanding effort. Also, sqlite-utils deserves special mention as incredibly handy and useful. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Call for birthday presents: if you're using Datasette, let us know how you're using it here 1447050738 | |
1314223118 | https://github.com/simonw/datasette/issues/1886#issuecomment-1314223118 | https://api.github.com/repos/simonw/datasette/issues/1886 | IC_kwDOBm6k_c5OVXQO | virtadpt 639730 | 2022-11-14T18:51:20Z | 2022-11-14T18:51:20Z | NONE | I use Datasette to analyze blocklists by using csv-to-sqlite to pull their contents into a database and Datasette to look around through them. I also use its REST API to query said database as part of filtering out garbage from domains found in those blocklists. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Call for birthday presents: if you're using Datasette, let us know how you're using it here 1447050738 | |
1313271719 | https://github.com/simonw/datasette/issues/1886#issuecomment-1313271719 | https://api.github.com/repos/simonw/datasette/issues/1886 | IC_kwDOBm6k_c5ORu-n | lucapette 124274 | 2022-11-14T08:25:12Z | 2022-11-14T08:25:12Z | NONE | Nothing spectacular yet, but I think this falls under "cool/cute application of datasette": improving fakedata performance for fun. tl;dr I used datasette to visualize benchmarking data. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Call for birthday presents: if you're using Datasette, let us know how you're using it here 1447050738 | |
1312898318 | https://github.com/simonw/datasette/issues/1886#issuecomment-1312898318 | https://api.github.com/repos/simonw/datasette/issues/1886 | IC_kwDOBm6k_c5OQT0O | eigenfoo 19851673 | 2022-11-14T00:52:16Z | 2022-11-14T00:52:16Z | NONE | I'm a cryptic crossword enthusiast and have spent a lot of time scraping and parsing cryptic crossword clues from various blogs, forums and publications. The result is over half a million clues from cryptic crosswords over the past twelve years, including the clue, answer, puzzle date, puzzle name and a link to the original source. This is all hosted using Datasette, which has been a delight to use: https://cryptics.georgeho.org/ This dataset is a significant work of crossword archivism and scholarship, as acquiring historical crosswords and structuring their contents require focused effort and tedious cleaning that few are willing to do for such trivial data - for example, according to this 2004 selection guide, the Library of Congress explicitly does not collect crossword puzzles. Anecdotally, I know that many constructors/setters of cryptic crosswords use this dataset as a resource, and some even simply call it "the database" - this is probably one of the most impactful data projects I've worked on! | { "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Call for birthday presents: if you're using Datasette, let us know how you're using it here 1447050738 | |
1312814245 | https://github.com/simonw/datasette/issues/1886#issuecomment-1312814245 | https://api.github.com/repos/simonw/datasette/issues/1886 | IC_kwDOBm6k_c5OP_Sl | noslouch 2090382 | 2022-11-13T20:28:26Z | 2022-11-13T20:28:26Z | NONE | I work at The Wall Street Journal as a computational journalist and serve as our self-appointed Datasette evangelist. They say that to a hammer everything looks like a nail, but the reality is newsrooms find themselves in a sea of nails! I've only got a couple public projects that I can share, but happy to offer you a look at some of the internal projects. More often than not the internal projects stay internal because the reporting doesn't lead anywhere or I can't convince an editor to greenlight it. But imho that's the beauty of datasette: a (relatively) painless mechanism to see if there's any there there. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | Call for birthday presents: if you're using Datasette, let us know how you're using it here 1447050738 | |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [issue] INTEGER REFERENCES [issues]([id]),
   [performed_via_github_app] TEXT
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
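As a rough sketch, the listing on this page can be reproduced directly against the schema above. Table and column names are taken from the CREATE TABLE statement, and the filter values come from the "8 rows where..." line at the top of the page; the json_extract call assumes SQLite's JSON1 functions are available, since reactions is stored as a JSON string in a TEXT column.

-- Comments by non-member authors ("NONE") on issue 1447050738, newest update first.
-- json_extract pulls the "+1" count out of the JSON stored in the reactions column.
select
    id,
    html_url,
    [user],
    created_at,
    updated_at,
    body,
    json_extract(reactions, '$."+1"') as plus_ones,
    issue
from issue_comments
where author_association = 'NONE'
  and issue = 1447050738
order by updated_at desc;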