
issues


17 rows where state = "open", type = "issue" and user = 536941 sorted by updated_at descending


repo

  • datasette 14
  • sqlite-utils 3
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association pull_request body repo type active_lock_reason performed_via_github_app reactions draft state_reason
2028698018 I_kwDOBm6k_c5463mi 2213 feature request: gzip compression of database downloads fgregg 536941 open 0     1 2023-12-06T14:35:03Z 2023-12-06T15:05:46Z   CONTRIBUTOR  

At the bottom of database pages, Datasette gives users the opportunity to download the underlying SQLite database. It would be great if that could be served gzip-compressed.

This is similar to #1213, but in my case I don't need Datasette to compress HTML and JSON, because my CDN layer does that for me. However, Cloudflare at least will not compress a MIME type of "application"

(see the list of compressible MIME types: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/)
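As a minimal sketch of what the request amounts to (this is not Datasette's actual download code), a gzip-compressed copy of the SQLite file can be produced with the standard library; a server would additionally need to send `Content-Encoding: gzip` when the client advertises `Accept-Encoding: gzip`:

```python
# Sketch only: write a .gz copy of the database next to the original
# so it can be handed to the response layer instead of the raw file.
import gzip
import pathlib
import shutil

def gzip_database(db_path: str) -> pathlib.Path:
    """Write a gzip-compressed copy of db_path and return its path."""
    src = pathlib.Path(db_path)
    dest = src.with_suffix(src.suffix + ".gz")
    with src.open("rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    return dest
```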

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2213/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
959137143 MDU6SXNzdWU5NTkxMzcxNDM= 1415 feature request: document minimum permissions for service account for cloudrun fgregg 536941 open 0     4 2021-08-03T13:48:43Z 2023-11-05T16:46:59Z   CONTRIBUTOR  

Thanks again for such a powerful project.

For deploying to Cloud Run from GitHub Actions, I'd like to create a service account with minimal permissions.

It would be great to document the minimum permissions that need to be set in IAM.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1415/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1163369515 I_kwDOBm6k_c5FV5wr 1655 query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data fgregg 536941 open 0     8 2022-03-09T00:56:40Z 2023-10-17T21:53:17Z   CONTRIBUTOR  

This page

is using about 400 MB in Firefox 97 on Mac OS X. If you download the HTML for the page, it's about 11 MB, and if you get the CSV for the data, it's about 1 MB.

It's using over 1 GB on Chrome 99.

I found this because I was trying to figure out why editing the SQL was getting very slow.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1655/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1822813627 I_kwDOBm6k_c5spe27 2108 some (many?) SQL syntax errors are not throwing errors with a .csv endpoint fgregg 536941 open 0     0 2023-07-26T16:57:45Z 2023-07-26T16:58:07Z   CONTRIBUTOR  

Here's a CTE query that should always fail with a syntax error:

```sql
with foo as (nonsense) select * from foo;
```

When we make this query against the default endpoint, we do indeed get a 400 status code and the problem is reported to the user: https://global-power-plants.datasettes.com/global-power-plants?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B

But if we use the CSV endpoint, we get a 200 status code and no indication of a problem: https://global-power-plants.datasettes.com/global-power-plants.csv?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B

Same with this bad SQL:

```sql
select a, from foo;
```

https://global-power-plants.datasettes.com/global-power-plants?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B

vs

https://global-power-plants.datasettes.com/global-power-plants.csv?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B

But Datasette catches this bad SQL at both endpoints:

```sql
slect a from foo;
```

https://global-power-plants.datasettes.com/global-power-plants?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B https://global-power-plants.datasettes.com/global-power-plants.csv?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B
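A quick way to confirm that SQLite itself rejects all three queries (so the CSV endpoint's 200 must come from Datasette's error handling, not from SQLite accepting the statement) is to try executing them directly:

```python
# Check whether SQLite will execute a statement at all; a syntax error
# (or missing table) raises sqlite3.OperationalError.
import sqlite3

def is_executable(sql: str) -> bool:
    try:
        sqlite3.connect(":memory:").execute(sql)
        return True
    except sqlite3.OperationalError:
        return False
```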

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/2108/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1595340692 I_kwDOCGYnMM5fFveU 530 add ability to configure "on delete" and "on update" attributes of foreign keys: fgregg 536941 open 0     2 2023-02-22T15:44:14Z 2023-05-08T20:39:01Z   CONTRIBUTOR  

SQLite supports these actions, and it would be quite nice to be able to add them with sqlite-utils.

https://www.sqlite.org/foreignkeys.html#fk_actions
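For illustration, here is the SQLite behaviour the request is about, written in plain SQL via the stdlib driver (sqlite-utils itself does not, per this issue, expose these attributes). Note that foreign key enforcement is off by default and must be switched on per connection:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("pragma foreign_keys = on")  # enforcement is off by default
con.execute("create table authors (id integer primary key)")
con.execute(
    "create table books (id integer primary key, "
    "author_id integer references authors(id) on delete cascade)"
)
con.execute("insert into authors values (1)")
con.execute("insert into books values (1, 1)")
# Deleting the parent row cascades to the child row.
con.execute("delete from authors where id = 1")
remaining = con.execute("select count(*) from books").fetchone()[0]
```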

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/530/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1560651350 I_kwDOCGYnMM5dBaZW 523 Feature request: trim all leading and trailing white space for all columns for all tables in a database fgregg 536941 open 0     1 2023-01-28T02:40:10Z 2023-01-28T02:41:14Z   CONTRIBUTOR  

It's pretty common that I need to trim leading or trailing white space from lots of columns in a database as part of an initial ETL.

I use the following recipe a lot, and it would be great to include this functionality in sqlite-utils.

trimify.sql:

```sql
select 'select group_concat(''update [' || name || '] set ['' || name || ''] = trim(['' || name || ''])'', ''; '') || ''; '' as sql_to_run from pragma_table_info('''||name||''');' from sqlite_schema;
```

then something like:

```bash
sqlite3 example.db < scripts/trimify.sql > table_trim.sql && \
sqlite3 example.db < table_trim.sql > trim.sql && \
sqlite3 example.db < trim.sql
```
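A Python equivalent of the recipe above, as a sketch of what a hypothetical sqlite-utils "trim everything" command might do (it uses `sqlite_master` rather than `sqlite_schema` so it also works on SQLite older than 3.33):

```python
# Sketch: trim leading/trailing whitespace in every column of every table.
import sqlite3

def trim_all_columns(con: sqlite3.Connection) -> None:
    tables = [r[0] for r in con.execute(
        "select name from sqlite_master where type = 'table'")]
    for table in tables:
        # pragma table_info rows are (cid, name, type, ...)
        cols = [r[1] for r in con.execute(f"pragma table_info([{table}])")]
        for col in cols:
            con.execute(f"update [{table}] set [{col}] = trim([{col}])")
```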

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/523/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1509783085 I_kwDOBm6k_c5Z_XYt 1969 sql-formatter javascript is not now working with CloudFlare rocketloader fgregg 536941 open 0     0 2022-12-23T21:14:06Z 2023-01-10T01:56:33Z   CONTRIBUTOR  

This is probably not a bug with Datasette, but I thought you might want to know, @simonw.

I noticed today that my Cloudflare-proxied Datasette instance lost the "Format SQL" option. I'm pretty sure it was there last week.

In the Cloudflare settings, if I turn off Rocket Loader, I get the "Format SQL" option back.

Rocket Loader works by asynchronously loading the JavaScript, so maybe there was a recent change that doesn't play well with the async loading?

I'm up to date with https://github.com/simonw/datasette/commit/e03aed00026cc2e59c09ca41f69a247e1a85cc89

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1969/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1400374908 I_kwDOBm6k_c5TeAZ8 1836 docker image is duplicating db files somehow fgregg 536941 open 0     13 2022-10-06T22:35:54Z 2022-10-08T16:56:51Z   CONTRIBUTOR  

If you look into the Docker image created by docker publish, the datasette inspect line is duplicating the db files.

Here's the result of the inspect command:

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1836/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1310243385 I_kwDOCGYnMM5OGLo5 456 feature request: pivot command fgregg 536941 open 0     5 2022-07-20T00:58:08Z 2022-07-20T17:50:50Z   CONTRIBUTOR  

Pivoting a long-format table to a wide-format table is pretty common and kind of a pain. Would love to see this feature in sqlite-utils!
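One way a hypothetical `sqlite-utils pivot` command could work is by building a conditional-aggregation query from the distinct values of the pivot column. A rough sketch (names and approach are illustrative, not an agreed design):

```python
import sqlite3

def pivot(con, table, row_key, col_key, value):
    """Return long-format rows pivoted wide: one column per distinct col_key."""
    cols = [r[0] for r in con.execute(
        f"select distinct [{col_key}] from [{table}] order by 1")]
    # One max(case ...) column per distinct pivot value.
    cases = ", ".join(
        f"max(case when [{col_key}] = ? then [{value}] end) as [{c}]"
        for c in cols)
    sql = (f"select [{row_key}], {cases} from [{table}] "
           f"group by [{row_key}] order by 1")
    return con.execute(sql, cols).fetchall()
```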

sqlite-utils 140912432 issue    
{
    "url": "https://api.github.com/repos/simonw/sqlite-utils/issues/456/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1200224939 I_kwDOBm6k_c5Hifqr 1707 [feature] expanded detail page fgregg 536941 open 0     1 2022-04-11T16:29:17Z 2022-04-11T16:33:00Z   CONTRIBUTOR  

Right now, if you click through to the detail page for a row, you get the info for the row and links to related tables:

It would be very cool if there was an option to expand the rows of the related tables from within this detail view.

If you had that, then Datasette could fulfill a pretty common use case where you want to search for an entity and get a consolidated detail view of what you know about that entity.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1707/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1077620955 I_kwDOBm6k_c5AOzDb 1549 Redesign CSV export to improve usability fgregg 536941 open 0   Datasette 1.0 3268330 5 2021-12-11T19:02:12Z 2022-04-04T11:17:13Z   CONTRIBUTOR  

Original title: Set content type for CSV so that browsers will attempt to download instead opening in the browser

Right now, if the user clicks on the CSV related to a <s>table or a</s> query, the response header for the content type is

"content-type: text/plain; charset=utf-8"

Most browsers will try to open a file with this content-type in the browser.

This is not what most people want to do, and lots of folks don't know that if they want to download the CSV and open it in a spreadsheet program, they then need to save the page through their browser.

It would be great if the response headers could be something like

    Content-type: text/csv
    Content-disposition: attachment; filename=MyVerySpecial.csv

which would lead browsers to open a download dialog.
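For concreteness, a minimal sketch of the header pair being asked for (a helper invented for illustration, not Datasette's API):

```python
# Hypothetical helper: headers that prompt a browser download dialog
# for a CSV response instead of rendering it inline.
def csv_download_headers(filename: str) -> list[tuple[str, str]]:
    return [
        ("content-type", "text/csv; charset=utf-8"),
        ("content-disposition", f'attachment; filename="{filename}"'),
    ]

headers = csv_download_headers("MyVerySpecial.csv")
```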

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1549/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1096536240 I_kwDOBm6k_c5BW9Cw 1586 run analyze on all databases as part of start up or publishing fgregg 536941 open 0     1 2022-01-07T17:52:34Z 2022-02-02T07:13:37Z   CONTRIBUTOR  

Running ANALYZE lets SQLite's query planner make much better use of any indices.

It might be nice if ANALYZE was run as part of the start up of "serve" or "publish".
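What "run ANALYZE at startup" amounts to is a single statement per database; the gathered statistics land in the `sqlite_stat1` table, which the query planner consults. A small demonstration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (id integer primary key, val text)")
con.execute("create index idx_t_val on t (val)")
con.executemany("insert into t (val) values (?)",
                [(str(i),) for i in range(100)])
# One statement; afterwards sqlite_stat1 holds per-index statistics.
con.execute("analyze")
stat_rows = con.execute("select count(*) from sqlite_stat1").fetchone()[0]
```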

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1586/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1090810196 I_kwDOBm6k_c5BBHFU 1583 consider adding deletion step of cloudbuild artifacts to gcloud publish fgregg 536941 open 0     1 2021-12-30T00:33:23Z 2021-12-30T00:34:16Z   CONTRIBUTOR  

Right now, as part of the publish process, images and other artifacts are stored in gcloud's Cloud Storage before being deployed to Cloud Run.

After successfully deploying, it would be nice if the script deleted these artifacts. Otherwise, if you have a regularly scheduled build process, you can end up paying to store lots of out-of-date artifacts.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1583/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
1079111498 I_kwDOBm6k_c5AUe9K 1553 if csv export is truncated in non streaming mode set informative response header fgregg 536941 open 0     3 2021-12-13T22:50:44Z 2021-12-16T19:17:28Z   CONTRIBUTOR  

Streaming mode is currently not enabled for custom queries, so results will be truncated at the max row limit.

It would be great if, when a response is truncated, a header signalling that were set on the response.

I need to write some pagination code to get full results back for a custom query, and it would make the code much better if I could reliably know when there is nothing more to limit/offset.
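Absent such a header, the usual workaround (sketched here against plain sqlite3, not the Datasette API) is to request one row more than the page size, so the client can tell whether another page exists:

```python
import sqlite3

def fetch_page(con, sql, limit, offset):
    """Fetch limit rows; has_more is True if at least one more row exists."""
    rows = con.execute(
        f"select * from ({sql}) limit ? offset ?", (limit + 1, offset)
    ).fetchall()
    has_more = len(rows) > limit
    return rows[:limit], has_more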

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1553/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
950664971 MDU6SXNzdWU5NTA2NjQ5NzE= 1401 unordered list is not rendering bullet points in description_html on database page fgregg 536941 open 0     2 2021-07-22T13:24:18Z 2021-10-23T13:09:10Z   CONTRIBUTOR  

Thanks for this tremendous package, @simonw!

In the description_html for a database, I have an unordered list.

However, on the database page on the deployed site, it is not rendering this as a bulleted list.

Page here: https://labordata-warehouse.herokuapp.com/nlrb-9da4ae5

The documentation gives an example of using an unordered list in a description_html, so I expected this would work.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1401/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
959710008 MDU6SXNzdWU5NTk3MTAwMDg= 1419 `publish cloudrun` should deploy a more recent SQLite version fgregg 536941 open 0     3 2021-08-04T00:45:55Z 2021-08-05T03:23:24Z   CONTRIBUTOR  

I recently changed from deploying a datasette using datasette publish heroku to datasette publish cloudrun. A query that ran on the Heroku site now throws a syntax error on the Cloud Run site.

I suspect this is because they are running different versions of sqlite3.

  • Heroku: sqlite3 3.31.1 (-/versions)
  • Cloudrun: sqlite3 3.27.2 (-/versions)

If so, it would be great to

  1. harmonize the sqlite3 versions across platforms
  2. update the docker files so as to update the sqlite3 version for cloudrun
datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1419/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   
951185411 MDU6SXNzdWU5NTExODU0MTE= 1402 feature request: social meta tags fgregg 536941 open 0     2 2021-07-23T01:57:23Z 2021-07-26T19:31:41Z   CONTRIBUTOR  

It would be very nice if Twitter, Slack, and other social media could make rich cards when people post a link to a Datasette instance.

datasette 107914493 issue    
{
    "url": "https://api.github.com/repos/simonw/datasette/issues/1402/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
   


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [pull_request] TEXT,
   [body] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
, [active_lock_reason] TEXT, [performed_via_github_app] TEXT, [reactions] TEXT, [draft] INTEGER, [state_reason] TEXT);
CREATE INDEX [idx_issues_repo]
                ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
                ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
                ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
                ON [issues] ([user]);
Powered by Datasette · Queries took 46.512ms · About: github-to-sqlite