id,node_id,number,title,user,state,locked,assignee,milestone,comments,created_at,updated_at,closed_at,author_association,pull_request,body,repo,type,active_lock_reason,performed_via_github_app,reactions,draft,state_reason
2028698018,I_kwDOBm6k_c5463mi,2213,feature request: gzip compression of database downloads,536941,open,0,,,1,2023-12-06T14:35:03Z,2023-12-06T15:05:46Z,,CONTRIBUTOR,,"At the bottom of database pages, datasette gives users the opportunity to download the underlying sqlite database. It would be great if that could be served gzip compressed.
This is similar to #1213, but in my case I don't need datasette to compress the HTML and JSON because my CDN layer does that for me. However, Cloudflare, at least, will not compress a mimetype of ""application""
(see list of mimetype: https://developers.cloudflare.com/speed/optimization/content/brotli/content-compression/)",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2213/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
959137143,MDU6SXNzdWU5NTkxMzcxNDM=,1415,feature request: document minimum permissions for service account for cloudrun,536941,open,0,,,4,2021-08-03T13:48:43Z,2023-11-05T16:46:59Z,,CONTRIBUTOR,,"Thanks again for such a powerful project.
For deploying to cloudrun from github actions, I'd like to create a service account with minimal permissions.
It would be great to document the minimum permissions that need to be set in IAM.
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1415/reactions"", ""total_count"": 1, ""+1"": 1, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1163369515,I_kwDOBm6k_c5FV5wr,1655,query result page is using 400mb of browser memory 40x size of html page and 400x size of csv data,536941,open,0,,,8,2022-03-09T00:56:40Z,2023-10-17T21:53:17Z,,CONTRIBUTOR,,"[this page](https://labordata.bunkum.us/opdr-8335ea3?sql=with+most_recent_lu+as+%28%0D%0A++select%0D%0A++++*%0D%0A++from%0D%0A++++%28%0D%0A++++++select%0D%0A++++++++*%0D%0A++++++from%0D%0A++++++++lm_data%0D%0A++++++order+by%0D%0A++++++++f_num%2C%0D%0A++++++++receive_date+desc%0D%0A++++%29+t%0D%0A++group+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++aff_abbr+%7C%7C+coalesce%28%27+local+%27+%7C%7C+desig_num%2C+%27+%27+%7C%7C+unit_name%29+as+abbr_local_name%2C%0D%0A++coalesce%28%0D%0A++++regexp_match%28%27%28.*%3F%29%28%2C%3F+AFL-CIO%24%29%27%2C+union_name%29%2C%0D%0A++++regexp_match%28%27%28.*%3F%29%28+IND%24%29%27%2C+union_name%29%2C%0D%0A++++union_name%0D%0A++%29+%7C%7C+coalesce%28%27+local+%27+%7C%7C+desig_num%2C+%27+%27+%7C%7C+unit_name%29+as+full_local_name%2C%0D%0A++*%0D%0Afrom%0D%0A++most_recent_lu%0D%0Awhere+%28desig_num+IS+NOT+NULL+OR+unit_name+IS+NOT+NULL%29+AND+desig_name+%21%3D+%27HQ%27%0D%0Alimit%0D%0A++5000+offset+0)
is using about 400 MB in Firefox 97 on Mac OS X. If you download the HTML for the page, it's about 11 MB, and if you get the CSV for the data it's about 1 MB.
It's using over 1 GB in Chrome 99.
I found this because I was trying to figure out why editing the SQL was getting very slow.
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1655/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1822813627,I_kwDOBm6k_c5spe27,2108,some (many?) SQL syntax errors are not throwing errors with a .csv endpoint,536941,open,0,,,0,2023-07-26T16:57:45Z,2023-07-26T16:58:07Z,,CONTRIBUTOR,,"here's a CTE query that should always fail with a syntax error:
```sql
with foo as (nonsense)
select
*
from
foo;
```
When we make this query against the default endpoint, we do indeed get a 400 status code and the problem is returned to the user: https://global-power-plants.datasettes.com/global-power-plants?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B
But if we use the CSV endpoint, we get a 200 status code and no indication of a problem: https://global-power-plants.datasettes.com/global-power-plants.csv?sql=with+foo+as+%28nonsense%29+select+*+from+foo%3B
The same happens with this bad SQL:
```sql
select
a,
from
foo;
```
https://global-power-plants.datasettes.com/global-power-plants?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B
vs
https://global-power-plants.datasettes.com/global-power-plants.csv?sql=select%0D%0A++a%2C%0D%0Afrom%0D%0A++foo%3B
However, datasette catches this bad SQL at both endpoints:
```sql
slect
a
from
foo;
```
https://global-power-plants.datasettes.com/global-power-plants?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B
https://global-power-plants.datasettes.com/global-power-plants.csv?sql=slect%0D%0A++a%0D%0Afrom%0D%0A++foo%3B
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/2108/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1595340692,I_kwDOCGYnMM5fFveU,530,"add ability to configure ""on delete"" and ""on update"" attributes of foreign keys:",536941,open,0,,,2,2023-02-22T15:44:14Z,2023-05-08T20:39:01Z,,CONTRIBUTOR,,"sqlite supports these, and it would be quite nice to be able to add them with sqlite-utils.
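A minimal sketch of the syntax in question (table names here are just illustrative):
```sql
create table authors (id integer primary key);
create table books (
  id integer primary key,
  author_id integer references authors(id) on delete cascade on update cascade
);
```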
https://www.sqlite.org/foreignkeys.html#fk_actions",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/530/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1560651350,I_kwDOCGYnMM5dBaZW,523,Feature request: trim all leading and trailing white space for all columns for all tables in a database,536941,open,0,,,1,2023-01-28T02:40:10Z,2023-01-28T02:41:14Z,,CONTRIBUTOR,,"It's pretty common that I need to trim leading or trailing white space from lots of columns in a database as part of an initial ETL.
I use the following recipe a lot, and it would be great to include this functionality in sqlite-utils.
`trimify.sql`
```sql
select 'select group_concat(''update [' || name || '] set ['' || name || ''] = trim(['' || name || ''])'', '';
'') || '';
'' as sql_to_run from pragma_table_info('''||name||''');' from sqlite_schema;
```
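(For reference, the SQL this recipe ultimately generates into `trim.sql` is one update per column, of this shape, with illustrative table and column names:)
```sql
update [my_table] set [my_column] = trim([my_column]);
```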
then something like:
```bash
sqlite3 example.db < scripts/trimify.sql > table_trim.sql && \
sqlite3 example.db < table_trim.sql > trim.sql && \
sqlite3 example.db < trim.sql
```",140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/523/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1509783085,I_kwDOBm6k_c5Z_XYt,1969,sql-formatter javascript is not now working with CloudFlare rocketloader,536941,open,0,,,0,2022-12-23T21:14:06Z,2023-01-10T01:56:33Z,,CONTRIBUTOR,,"This is probably not a bug with datasette, but I thought you might want to know, @simonw.
I noticed today that my Cloudflare-proxied datasette instance lost the ""Format SQL"" option. I'm pretty sure it was there last week.
In the CloudFlare settings, if I turn off [Rocket Loader](https://developers.cloudflare.com/fundamentals/speed/rocket-loader/), I get the ""Format SQL"" option back.
Rocket Loader works by asynchronously loading the javascript, so maybe there was a recent change that doesn't play well with the async loading?
I'm up to date with https://github.com/simonw/datasette/commit/e03aed00026cc2e59c09ca41f69a247e1a85cc89",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1969/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1400374908,I_kwDOBm6k_c5TeAZ8,1836,docker image is duplicating db files somehow,536941,open,0,,,13,2022-10-06T22:35:54Z,2022-10-08T16:56:51Z,,CONTRIBUTOR,,"if you look into the docker image created by docker publish, the `datasette inspect` line is duplicating the db files.
here's the result of the inspect command:
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1836/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1310243385,I_kwDOCGYnMM5OGLo5,456,feature request: pivot command,536941,open,0,,,5,2022-07-20T00:58:08Z,2022-07-20T17:50:50Z,,CONTRIBUTOR,,Pivoting a long-format table to a wide-format table is pretty common and kind of a pain. Would love to see this feature in sqlite-utils!,140912432,issue,,,"{""url"": ""https://api.github.com/repos/simonw/sqlite-utils/issues/456/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1200224939,I_kwDOBm6k_c5Hifqr,1707,[feature] expanded detail page,536941,open,0,,,1,2022-04-11T16:29:17Z,2022-04-11T16:33:00Z,,CONTRIBUTOR,,"Right now, if you click on the detail page for a row, you get the info for the row and links to related tables:
![Screenshot 2022-04-11 at 12-27-26 lm20 filing](https://user-images.githubusercontent.com/536941/162786802-90ac1a71-4624-47c4-ae55-b783f4f6c92d.png)
It would be very cool if there was an option to expand the rows of the related tables from within this detail view.
If you had that, then datasette could fulfill a pretty common use case where you want to search for an entity and get a consolidated detail view of what you know about that entity.
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1707/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1077620955,I_kwDOBm6k_c5AOzDb,1549,Redesign CSV export to improve usability,536941,open,0,,3268330,5,2021-12-11T19:02:12Z,2022-04-04T11:17:13Z,,CONTRIBUTOR,,"*Original title: Set content type for CSV so that browsers will attempt to download instead opening in the browser*
Right now, if the user clicks on the CSV related to a table or a query, the response header for the content type is
""content-type: text/plain; charset=utf-8""
Most browsers will try to open a file with this content-type in the browser.
This is not what most people want to do, and lots of folks don't know that if they want to download the CSV and open it in a spreadsheet program, they then need to save the page through their browser.
It would be great if the response header could be something like
```
Content-Type: text/csv
Content-Disposition: attachment; filename=MyVerySpecial.csv
```
which would lead browsers to open a download dialog.
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1549/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1096536240,I_kwDOBm6k_c5BW9Cw,1586,run analyze on all databases as part of start up or publishing,536941,open,0,,,1,2022-01-07T17:52:34Z,2022-02-02T07:13:37Z,,CONTRIBUTOR,,"Running `analyze;` lets sqlite's query planner make *much* better use of any indices.
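(For reference, running it is a single statement; the gathered statistics are stored in `sqlite_stat1`, which the query planner then consults:)
```sql
-- a minimal illustration: run once after the data is loaded
analyze;
```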
It might be nice if the analyze was run as part of the start up of ""serve"" or ""publish"".",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1586/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1090810196,I_kwDOBm6k_c5BBHFU,1583,consider adding deletion step of cloudbuild artifacts to gcloud publish,536941,open,0,,,1,2021-12-30T00:33:23Z,2021-12-30T00:34:16Z,,CONTRIBUTOR,,"Right now, as part of the publish process, images and other artifacts are stored in gcloud's cloud storage before being deployed to cloudrun.
After successfully deploying, it would be nice if the script deleted these artifacts. Otherwise, if you have a regularly scheduled build process, you can end up paying to store lots of out-of-date artifacts.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1583/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
1079111498,I_kwDOBm6k_c5AUe9K,1553,if csv export is truncated in non streaming mode set informative response header,536941,open,0,,,3,2021-12-13T22:50:44Z,2021-12-16T19:17:28Z,,CONTRIBUTOR,,"Streaming mode is currently not enabled for custom queries, so the queries will be truncated to the max row limit.
It would be great if, when a response is truncated, a header signalling the truncation were set on the response.
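(For context, the limit/offset paging I have in mind looks like this, with an illustrative table name:)
```sql
select * from my_table order by rowid limit 1000 offset 2000;
```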
I need to write some pagination code for getting full results back from a custom query, and it would make the code much better if I could reliably know when there is nothing more to limit/offset.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1553/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
950664971,MDU6SXNzdWU5NTA2NjQ5NzE=,1401,unordered list is not rendering bullet points in description_html on database page,536941,open,0,,,2,2021-07-22T13:24:18Z,2021-10-23T13:09:10Z,,CONTRIBUTOR,,"Thanks for this tremendous package, @simonw!
In the `description_html` for a database, I [have an unordered list](https://github.com/labordata/warehouse/blob/fcea4502e5b615b0eb3e0bdcb45ec634abe20bb6/warehouse_metadata.yml#L19-L22).
However, on the database page on the deployed site, it is not rendering this as a bulleted list.
![Screenshot 2021-07-22 at 09-21-51 nlrb](https://user-images.githubusercontent.com/536941/126645923-2777b7f1-fd4c-4d2d-af70-a35e49a07675.png)
Page here: https://labordata-warehouse.herokuapp.com/nlrb-9da4ae5
The documentation gives an [example of using an unordered list](https://docs.datasette.io/en/stable/metadata.html#using-yaml-for-metadata) in a `description_html`, so I expected this to work.",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1401/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
959710008,MDU6SXNzdWU5NTk3MTAwMDg=,1419,`publish cloudrun` should deploy a more recent SQLite version,536941,open,0,,,3,2021-08-04T00:45:55Z,2021-08-05T03:23:24Z,,CONTRIBUTOR,,"I recently changed from deploying a datasette using `datasette publish heroku` to `datasette publish cloudrun`. [A query that ran on the heroku site](https://odpr.bunkum.us/odpr-6c2f4fc?sql=with+pivot_members+as+%28%0D%0A++select%0D%0A++++f_num%2C%0D%0A++++max%28union_name%29+as+union_name%2C%0D%0A++++max%28aff_abbr%29+as+abbreviation%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2007%0D%0A++++%29+as+%222007%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2008%0D%0A++++%29+as+%222008%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2009%0D%0A++++%29+as+%222009%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2010%0D%0A++++%29+as+%222010%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2011%0D%0A++++%29+as+%222011%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2012%0D%0A++++%29+as+%222012%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2013%0D%0A++++%29+as+%222013%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2014%0D%0A++++%29+as+%222014%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2015%0D%0A++++%29+as+%222015%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2016%0D%0A++++%29+as+%222016%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2017%0D%0A++++%29+as+%222017%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2018%0D%0A++++%29+as+%222018%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2019%0D%0A++++%29+as+%222019%22%0D%0A++from%0D%0A++++lm_data%0D%0A++++left+join+ar_membership+using+%28rpt_id%29%0D%0A++where%0D%0A++++members+%21%3D+%27%27%0D%0A++++and+trim%28desig_name%29+%3D+%27NHQ%27%0D%0A++++and+union_name+not+in+%28%0D%0A++++++%27AFL-CIO%27%2C%0D%0A++++++%27CHANGE+TO+WIN%27%2C%0D%0A++++++%27FOOD+ALLIED+SVC+TRADES+DEPT+AFL-CIO%27%0D%0A++++%29%0D%0A++++and+f_num+not+in+%28387%2C+296%2C+123%2C+347%2C+531897%2C+30410%2C+49%29%0D%0A++++and+lower%28ar_membership.category%29+IN+%28%0D%0A++++++%27active+members%27%2C%0D%0A++++++%27active+professional%27%2C%0D%0A++++++%27regular+members%27%2C%0D%0A++++++%27full+time+member%27%2C%0D%0A++++++%27full+time+members%27%2C%0D%0A++++++%27active+education+support+professional%27%2C%0D%0A++++++%27active%27%2C%0D%0A++++++%27active+member%27%2C%0D%0A++++++%27members%27%2C%0D%0A++++++%27see+item+69%27%2C%0D%0A++++++%27regular%27%2C%0D%0A++++++%27dues+paying+members%27%2C%0D%0A++++++%27building+trades+journeyman%27%2C%0D%0A++++++%27full+per+capita+tax+payers%27%2C%0D%0A++++++%27active+postal%27%2C%0D%0A++++++%27active+membership%27%2C%0D%0A++++++%27full-time+member%27%2C%0D%0A++++++%27regular+active+member%27%2C%0D%0A++++++%27journeyman%27%2C%0D%0A++++++%27member%27%2C%0D%0A++++++%27journeymen%27%2C%0D%0A++++++%27one+half+per+capita+tax+payers%27%2C%0D%0A++++++%27schedule+1%27%2C%0D%0A++++++%27active+memebers%27%2C%0D%0A++++++%27active+members+-+us%27%2C%0D%0A++++++%27dues+paying+membership%27%0D%0A++++%29%0D%0A++GROUP+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++*%0D%0Afrom%0D%0A++pivot
_members%0D%0Aorder+by%0D%0A++%222019%22+desc%3B), now throws a syntax error on the [cloudrun site](https://labordata.bunkum.us/odpr-6c2f4fc?sql=with+pivot_members+as+%28%0D%0A++select%0D%0A++++f_num%2C%0D%0A++++max%28union_name%29+as+union_name%2C%0D%0A++++max%28aff_abbr%29+as+abbreviation%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2007%0D%0A++++%29+as+%222007%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2008%0D%0A++++%29+as+%222008%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2009%0D%0A++++%29+as+%222009%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2010%0D%0A++++%29+as+%222010%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2011%0D%0A++++%29+as+%222011%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2012%0D%0A++++%29+as+%222012%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2013%0D%0A++++%29+as+%222013%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2014%0D%0A++++%29+as+%222014%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2015%0D%0A++++%29+as+%222015%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2016%0D%0A++++%29+as+%222016%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2017%0D%0A++++%29+as+%222017%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2018%0D%0A++++%29+as+%222018%22%2C%0D%0A++++sum%28number%29+filter+%28%0D%0A++++++where%0D%0A++++++++yr_covered+%3D+2019%0D%0A++++%29+as+%222019%22%0D%0A++from%0D%0A++++lm_data%0D%0A++++left+join+ar_membership+using+%28rpt_id%29%0D%0A++where%0D%0A++++members+%21%3D+%27%27%0D%0A++++and+trim%28desig_name%29+%3D+%27NHQ%27%0D%0A++++and+union_name+not+in+%28%0D%0A++++++%27AFL-CIO%27%2C%0D%0A++++++%27CHANGE+TO+WIN%27%2C%0D%0A++++++%27FOOD+ALLIED+SVC+TRADES+DEPT+AFL-CIO%27%0D%0A++++%29%0D%0A++++and+f_num+not+in+%28387%2C+296%2C+123%2C+347%2C+531897%2C+30410%2C+49%29%0D%0A++++and+lower%28ar_membership.category%29+IN+%28%0D%0A++++++%27active+members%27%2C%0D%0A++++++%27active+professional%27%2C%0D%0A++++++%27regular+members%27%2C%0D%0A++++++%27full+time+member%27%2C%0D%0A++++++%27full+time+members%27%2C%0D%0A++++++%27active+education+support+professional%27%2C%0D%0A++++++%27active%27%2C%0D%0A++++++%27active+member%27%2C%0D%0A++++++%27members%27%2C%0D%0A++++++%27see+item+69%27%2C%0D%0A++++++%27regular%27%2C%0D%0A++++++%27dues+paying+members%27%2C%0D%0A++++++%27building+trades+journeyman%27%2C%0D%0A++++++%27full+per+capita+tax+payers%27%2C%0D%0A++++++%27active+postal%27%2C%0D%0A++++++%27active+membership%27%2C%0D%0A++++++%27full-time+member%27%2C%0D%0A++++++%27regular+active+member%27%2C%0D%0A++++++%27journeyman%27%2C%0D%0A++++++%27member%27%2C%0D%0A++++++%27journeymen%27%2C%0D%0A++++++%27one+half+per+capita+tax+payers%27%2C%0D%0A++++++%27schedule+1%27%2C%0D%0A++++++%27active+memebers%27%2C%0D%0A++++++%27active+members+-+us%27%2C%0D%0A++++++%27dues+paying+membership%27%0D%0A++++%29%0D%0A++GROUP+by%0D%0A++++f_num%0D%0A%29%0D%0Aselect%0D%0A++*%0D%0Afrom%0D%0A++pivot_members%0D%0Aorder+by%0D%0A++%222019%22+desc%3B).
I suspect this is because they are running different versions of sqlite3.
- Heroku: sqlite3 3.31.1 ([-/versions](https://odpr.bunkum.us/-/versions))
- Cloudrun: sqlite3 3.27.2 ([-/versions](https://labordata.bunkum.us/-/versions))
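(As an aside, the running SQLite version can also be confirmed directly with a query, in addition to the `/-/versions` pages linked above:)
```sql
select sqlite_version();
```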
If so, it would be great to
1. harmonize the sqlite3 versions across platforms
2. update the docker files so as to update the sqlite3 version for cloudrun",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1419/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,
951185411,MDU6SXNzdWU5NTExODU0MTE=,1402,feature request: social meta tags,536941,open,0,,,2,2021-07-23T01:57:23Z,2021-07-26T19:31:41Z,,CONTRIBUTOR,,"It would be very nice if Twitter, Slack, and other social media platforms could make rich cards when people post a link to a datasette instance.
",107914493,issue,,,"{""url"": ""https://api.github.com/repos/simonw/datasette/issues/1402/reactions"", ""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,