issues
17 rows where repo = 107914493, state = "open" and user = 536941 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | pull_request | body | repo | type | active_lock_reason | performed_via_github_app | reactions | draft | state_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2028698018 | I_kwDOBm6k_c5463mi | 2213 | feature request: gzip compression of database downloads | fgregg 536941 | open | 0 | | | 1 | 2023-12-06T14:35:03Z | 2023-12-06T15:05:46Z | | CONTRIBUTOR | | At the bottom of database pages, Datasette gives users the option to download the underlying SQLite database. It would be great if that download could be served gzip-compressed. This is similar to #1213, but in my case I don't need Datasette to compress HTML and JSON because my CDN layer does that for me; Cloudflare, at least, will not compress an "application" mimetype (see its list of compressible content types: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/). | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
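For #2213 above, a minimal sketch of one possible workaround rather than anything Datasette ships: pre-compress the SQLite file with Python's gzip module so a proxy or static file server can serve it with `Content-Encoding: gzip`, sidestepping Cloudflare's mimetype allow-list. The file names are placeholders.

```python
import gzip
import shutil

# Hypothetical workaround sketch: compress the database once after each build.
# "fixtures.db" is a placeholder file name.
with open("fixtures.db", "rb") as src, gzip.open("fixtures.db.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)
```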
959137143 | MDU6SXNzdWU5NTkxMzcxNDM= | 1415 | feature request: document minimum permissions for service account for cloudrun | fgregg 536941 | open | 0 | | | 4 | 2021-08-03T13:48:43Z | 2023-11-05T16:46:59Z | | CONTRIBUTOR | | Thanks again for such a powerful project. For deploying to Cloud Run from GitHub Actions, I'd like to create a service account with minimal permissions. It would be great to document the minimum permissions that need to be granted to that account in IAM. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1415/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
1163369515 | I_kwDOBm6k_c5FV5wr | 1655 | query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data | fgregg 536941 | open | 0 | | | 8 | 2022-03-09T00:56:40Z | 2023-10-17T21:53:17Z | | CONTRIBUTOR | | The query result page is using about 400 MB in Firefox 97 on Mac OS X. If you download the HTML for the page, it's about 11 MB, and if you get the CSV for the data it's about 1 MB. The same page uses over 1 GB in Chrome 99. I found this because I was trying to figure out why editing the SQL was getting very slow. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
1426379903 | PR_kwDOBm6k_c5BtJNn | 1870 | don't use immutable=1, only mode=ro | fgregg 536941 | open | 0 | | | 7 | 2022-10-27T23:33:04Z | 2023-10-03T19:12:37Z | | CONTRIBUTOR | simonw/datasette/pulls/1870 | Opening db files in immutable mode sometimes leads to the file being mutated, which causes duplication in the Docker image layers: see #1836 and #1480. That this happens in "immutable" mode is surprising, because the SQLite docs say that setting it should open the database as read-only (https://www.sqlite.org/c3ref/open.html). Perhaps this is a bug in SQLite? :books: Documentation preview :books:: https://datasette--1870.org.readthedocs.build/en/1870/ | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
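For PR #1870, a minimal sketch (placeholder file name, standard library only) of the two URI options in question: `mode=ro` asks SQLite to open the file read-only, while `immutable=1` additionally tells it the file will never change, which disables locking and change detection.

```python
import sqlite3

# "data.db" is a placeholder path; the file is assumed to already exist.
read_only = sqlite3.connect("file:data.db?mode=ro", uri=True)
immutable = sqlite3.connect("file:data.db?mode=ro&immutable=1", uri=True)

for conn in (read_only, immutable):
    # Both connections should refuse writes; the difference is in locking
    # and change-detection behaviour, not in the SQL you can run.
    print(conn.execute("select count(*) from sqlite_master").fetchone())
    conn.close()
```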
1822813627 | I_kwDOBm6k_c5spe27 | 2108 | some (many?) SQL syntax errors are not throwing errors with a .csv endpoint | fgregg 536941 | open | 0 | | | 0 | 2023-07-26T16:57:45Z | 2023-07-26T16:58:07Z | | CONTRIBUTOR | | Here's a CTE query that should always fail with a syntax error: `with foo as (nonsense) select * from foo;` When we make this query against the default endpoint, we do indeed get a 400 status code and the problem is reported to the user: https://global-power-plants.datasettes.com/global-power-plants?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B But if we use the CSV endpoint, we get a 200 status code and no indication of a problem: https://global-power-plants.datasettes.com/global-power-plants.csv?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B The same thing happens with another piece of bad SQL. However, Datasette catches this bad SQL (`slect a from foo;`) at both endpoints: https://global-power-plants.datasettes.com/global-power-plants?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B and https://global-power-plants.datasettes.com/global-power-plants.csv?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
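A quick way to confirm, for #2108, that SQLite itself rejects the first query at prepare time, so the CSV endpoint's 200 response must come from error handling rather than from SQLite accepting the statement (in-memory database used purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # no tables needed: the statement fails to parse
try:
    conn.execute("with foo as (nonsense) select * from foo;")
except sqlite3.OperationalError as exc:
    print("syntax error:", exc)
```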
1555701851 | PR_kwDOBm6k_c5IdsD7 | 2003 | Show referring tables and rows when the referring foreign key is compound | fgregg 536941 | open | 0 | | | 3 | 2023-01-24T21:31:31Z | 2023-01-25T18:44:42Z | | CONTRIBUTOR | simonw/datasette/pulls/2003 | SQLite foreign keys can be compound, but that is not as well supported by Datasette as single-column foreign keys are. In particular, there are two issues, both discussed in #1099. This PR only fixes the second one, because it's not clear what the right UX is for the first. There are some things about this approach that might not be desirable. | datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/2003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
1509783085 | I_kwDOBm6k_c5Z_XYt | 1969 | sql-formatter javascript is not now working with CloudFlare rocketloader | fgregg 536941 | open | 0 | | | 0 | 2022-12-23T21:14:06Z | 2023-01-10T01:56:33Z | | CONTRIBUTOR | | This is probably not a bug with Datasette, but I thought you might want to know, @simonw. I noticed today that my CloudFlare-proxied Datasette instance lost the "Format SQL" option; I'm pretty sure it was there last week. In the CloudFlare settings, if I turn off Rocket Loader, I get the "Format SQL" option back. Rocket Loader works by loading JavaScript asynchronously, so maybe there was a recent change that doesn't play well with the async loading? I'm up to date with https://github.com/simonw/datasette/commit/e03aed00026cc2e59c09ca41f69a247e1a85cc89 | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
1400374908 | I_kwDOBm6k_c5TeAZ8 | 1836 | docker image is duplicating db files somehow | fgregg 536941 | open | 0 | | | 13 | 2022-10-06T22:35:54Z | 2022-10-08T16:56:51Z | | CONTRIBUTOR | | If you look into the Docker image created by docker publish, the database files appear to be duplicated. Here's the result of the inspect command: | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
1200224939 | I_kwDOBm6k_c5Hifqr | 1707 | [feature] expanded detail page | fgregg 536941 | open | 0 | | | 1 | 2022-04-11T16:29:17Z | 2022-04-11T16:33:00Z | | CONTRIBUTOR | | Right now, if you click through to the detail page for a row, you get the info for that row and links to related tables. It would be very cool if there was an option to expand the rows of the related tables from within this detail view. If you had that, then Datasette could fulfill a pretty common use case where you want to search for an entity and get a consolidated detail view of what you know about that entity. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
1077620955 | I_kwDOBm6k_c5AOzDb | 1549 | Redesign CSV export to improve usability | fgregg 536941 | open | 0 | | Datasette 1.0 3268330 | 5 | 2021-12-11T19:02:12Z | 2022-04-04T11:17:13Z | | CONTRIBUTOR | | Original title: Set content type for CSV so that browsers will attempt to download it instead of opening it in the browser. Right now, if the user clicks on the CSV related to a ~~table or a~~ query, the response header for the content type is "content-type: text/plain; charset=utf-8". Most browsers will try to open a file with this content type in the browser. This is not what most people want to do, and lots of folks don't know that if they want to download the CSV and open it in a spreadsheet program, they then need to save the page through their browser. It would be great if the response headers led browsers to open a download dialog instead. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1549/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
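For #1549, a minimal sketch of the conventional response headers that make browsers offer a download instead of rendering the CSV; the filename and the use of Content-Disposition here are illustrative assumptions, not necessarily what Datasette ended up shipping.

```python
# Illustrative only: headers that typically trigger a "save file" dialog.
# The filename is a placeholder.
csv_download_headers = {
    "content-type": "text/csv; charset=utf-8",
    "content-disposition": 'attachment; filename="query-results.csv"',
}
```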
1096536240 | I_kwDOBm6k_c5BW9Cw | 1586 | run analyze on all databases as part of start up or publishing | fgregg 536941 | open | 0 | | | 1 | 2022-01-07T17:52:34Z | 2022-02-02T07:13:37Z | | CONTRIBUTOR | | Running ANALYZE gathers statistics that help SQLite's query planner choose better plans. It might be nice if ANALYZE was run as part of the start up of "serve" or "publish". | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
1090810196 | I_kwDOBm6k_c5BBHFU | 1583 | consider adding deletion step of cloudbuild artifacts to gcloud publish | fgregg 536941 | open | 0 | | | 1 | 2021-12-30T00:33:23Z | 2021-12-30T00:34:16Z | | CONTRIBUTOR | | Right now, as part of the publish process, images and other artifacts are stored in Google Cloud Storage before being deployed to Cloud Run. After successfully deploying, it would be nice if the script deleted these artifacts. Otherwise, if you have a regularly scheduled build process, you can end up paying to store lots of out-of-date artifacts. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
1079111498 | I_kwDOBm6k_c5AUe9K | 1553 | if csv export is truncated in non streaming mode set informative response header | fgregg 536941 | open | 0 | | | 3 | 2021-12-13T22:50:44Z | 2021-12-16T19:17:28Z | | CONTRIBUTOR | | Streaming mode is currently not enabled for custom queries, so those queries will be truncated at the max row limit. It would be great if, when a response is truncated, a header signalling that were set on the response. I need to write some pagination code for getting full results back for a custom query, and it would make that code much better if I could reliably know when there is nothing more to fetch with limit/offset. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1553/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
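For #1553, a minimal sketch of the kind of pagination loop being described, against a hypothetical Datasette JSON endpoint. The base URL, query, and page size are placeholders; the stop condition (a short page) is exactly the guess that an explicit truncation header would make unnecessary.

```python
import json
import urllib.parse
import urllib.request

BASE = "https://example.com/mydb.json"   # placeholder Datasette instance
QUERY = "select * from mytable"          # placeholder custom query
PAGE = 100

rows, offset = [], 0
while True:
    sql = f"{QUERY} limit {PAGE} offset {offset}"
    url = BASE + "?" + urllib.parse.urlencode({"sql": sql, "_shape": "array"})
    with urllib.request.urlopen(url) as resp:
        page = json.load(resp)
    rows.extend(page)
    if len(page) < PAGE:  # have to infer truncation; a header would make this explicit
        break
    offset += PAGE
```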
1033678984 | PR_kwDOBm6k_c4tjgJ8 | 1495 | Allow routes to have extra options | fgregg 536941 | open | 0 | | | 5 | 2021-10-22T15:00:45Z | 2021-11-19T15:36:27Z | | CONTRIBUTOR | simonw/datasette/pulls/1495 | Right now, Datasette routes can only be a 2-tuple of a regex and a view. If it was possible for Datasette to handle extra options, like standard Django routing does, it would add flexibility for plugin authors. For example, if extra options were enabled, then it would be easy to make a single table the home page (#1284). This plugin would accomplish it:
```python
from datasette import hookimpl
from datasette.views.table import TableView


@hookimpl
def register_routes(datasette):
    return [
        (r"^/$", TableView.as_view(datasette), {'db_name': 'DB_NAME', 'table': 'TABLE_NAME'})
    ]
```
| datasette 107914493 | pull | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | 0 | |
950664971 | MDU6SXNzdWU5NTA2NjQ5NzE= | 1401 | unordered list is not rendering bullet points in description_html on database page | fgregg 536941 | open | 0 | | | 2 | 2021-07-22T13:24:18Z | 2021-10-23T13:09:10Z | | CONTRIBUTOR | | Thanks for this tremendous package, @simonw! In the description_html for one of my databases I use an unordered list. However, on the database page of the deployed site, it is not rendering this as a bulleted list. Page here: https://labordata-warehouse.herokuapp.com/nlrb-9da4ae5 The documentation gives an example of using an unordered list in a description_html. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
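For #1401, a minimal sketch of the kind of metadata being described, using the per-database description_html key; the database name and list contents are placeholders, not taken from the report.

```python
import json

# Placeholder metadata.json fragment with a per-database description_html
# containing an unordered list that should render as bullet points.
metadata = {
    "databases": {
        "nlrb": {
            "description_html": "<ul><li>elections</li><li>unfair labor practice filings</li></ul>"
        }
    }
}
print(json.dumps(metadata, indent=2))
```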
959710008 | MDU6SXNzdWU5NTk3MTAwMDg= | 1419 | `publish cloudrun` should deploy a more recent SQLite version | fgregg 536941 | open | 0 | | | 3 | 2021-08-04T00:45:55Z | 2021-08-05T03:23:24Z | | CONTRIBUTOR | | I recently changed how I deploy a Datasette instance, and I suspect a difference in behaviour I'm seeing is because the deployments are running different versions of sqlite3. If so, it would be great to have `publish cloudrun` deploy a more recent version of SQLite. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
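For #1419, a minimal sketch of how to check which SQLite library a given deployment is actually running, using only the standard library (Datasette also reports this on its /-/versions page):

```python
import sqlite3

# The version of the SQLite library linked into this Python build is what
# determines which SQL features are available.
print("SQLite library version:", sqlite3.sqlite_version)
```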
951185411 | MDU6SXNzdWU5NTExODU0MTE= | 1402 | feature request: social meta tags | fgregg 536941 | open | 0 | | | 2 | 2021-07-23T01:57:23Z | 2021-07-26T19:31:41Z | | CONTRIBUTOR | | It would be very nice if Twitter, Slack, and other social media platforms could make rich cards when people post a link to a Datasette instance. | datasette 107914493 | issue | | | { "url": "https://api.github.com/repos/simonw/datasette/issues/1402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | |
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [pull_request] TEXT,
    [body] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT,
    [active_lock_reason] TEXT,
    [performed_via_github_app] TEXT,
    [reactions] TEXT,
    [draft] INTEGER,
    [state_reason] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
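A minimal sketch of the query this page represents, run against the schema above with Python's built-in sqlite3 module; "github.db" is a placeholder for the file this issues table lives in.

```python
import sqlite3

conn = sqlite3.connect("github.db")  # placeholder database file
rows = conn.execute(
    """
    select id, number, title, updated_at
    from issues
    where repo = 107914493 and state = 'open' and [user] = 536941
    order by updated_at desc
    """
).fetchall()
for row in rows:
    print(row)
```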